# Introduction

Welcome to the official Flowise documentation.

<figure><img src="/files/dxsQhGeHzMNBUFAp3GVg" alt=""><figcaption></figcaption></figure>

Flowise is an open source generative AI development platform for building AI Agents and LLM workflows.

It offers a complete solution that includes:

* [x] Visual Builder
* [x] Tracing & Analytics
* [x] Evaluations
* [x] Human in the Loop
* [x] API, CLI, SDK, Embedded Chatbot
* [x] Teams & Workspaces

There are three main visual builders:

* Assistant
* Chatflow
* Agentflow

## Assistant

Assistant is the most beginner-friendly way to create an AI Agent. Users can create a chat assistant that follows instructions, uses tools when necessary, and retrieves knowledge from uploaded files ([RAG](https://en.wikipedia.org/wiki/Retrieval-augmented_generation)) to respond to user queries.

<figure><picture><source srcset="/files/KA9W23tvHUJBDRIDAdiK" media="(prefers-color-scheme: dark)"><img src="/files/4XRTpgSZmwIt7OVN9T9l" alt=""></picture><figcaption></figcaption></figure>

## Chatflow

Chatflow is designed for building single-agent systems, chatbots, and simple LLM flows. It is more flexible than Assistant: users can apply advanced techniques such as Graph RAG, rerankers, and retrievers.

<figure><picture><source srcset="/files/VTvjbLBSByR4XsLmpRUC" media="(prefers-color-scheme: dark)"><img src="/files/dQAf3a8SobjubDHDrlg2" alt=""></picture><figcaption></figcaption></figure>

## Agentflow

Agentflow is a superset of Chatflow and Assistant. It can be used to create chat assistants, single-agent systems, multi-agent systems, and complex workflow orchestrations. Learn more in [Agentflow V2](/using-flowise/agentflowv2).

<figure><picture><source srcset="/files/kjYmLXN9awAFkQhfik5t" media="(prefers-color-scheme: dark)"><img src="/files/YdDHGnocuGjcnNLTmKIH" alt=""></picture><figcaption></figcaption></figure>

## Flowise Capabilities

| Feature Area                 | Flowise Capabilities                                                                                                |
| ---------------------------- | ------------------------------------------------------------------------------------------------------------------- |
| Orchestration                | Visual editor, supports open-source & proprietary models, expressions, custom code, branching/looping/routing logic |
| Data Ingestion & Integration | Connects to 100+ sources, tools, vector databases, memories                                                         |
| Monitoring                   | Execution logs, visual debugging, external log streaming                                                            |
| Deployment                   | Self-hosted options, air-gapped deploy                                                                              |
| Data Processing              | Data transforms, filters, aggregates, custom code, RAG indexing pipelines                                           |
| Memory & Planning            | Various memory optimization techniques and integrations                                                             |
| MCP Integration              | MCP client/server nodes, tool listing, SSE, auth support                                                            |
| Safety & Control             | Input moderation & output post-processing                                                                           |
| API, SDK, CLI                | API access, JS/Python SDK, Command Line Interface                                                                   |
| Embedded & Share Chatbot     | Customizable embedded chat widget and component                                                                     |
| Templates & Components       | Template marketplace, reusable components                                                                           |
| Security Controls            | RBAC, SSO, encrypted creds, secret managers, rate limit, restricted domains                                         |
| Scalability                  | Vertical/horizontal scale, high throughput/workflow load                                                            |
| Evaluations                  | Datasets, Evaluators and Evaluations                                                                                |
| Community Support            | Active community forum                                                                                              |
| Vendor Support               | SLA support, consultations, fixed/deterministic pricing                                                             |

## Contributing

If you want to help this project, please consider reviewing the [Contribution Guide](https://github.com/FlowiseAI/Flowise/blob/main/CONTRIBUTING.md).

## Need Help?

For support and further discussion, head over to our [Discord](https://discord.gg/jbaHfsRVBW) server.


# Get Started

***

## Cloud

Self-hosting requires more technical skill: setting up an instance, backing up the database, and maintaining updates. If you aren't experienced at managing servers and just want to use the web app, we recommend [Flowise Cloud](https://cloud.flowiseai.com).

## Quick Start

{% hint style="info" %}
Pre-requisite: ensure [NodeJS](https://nodejs.org/en/download) is installed on your machine. Node `v18.15.0` or `v20` and above is supported.
{% endhint %}

Install Flowise locally using NPM.

1. Install Flowise:

```bash
npm install -g flowise
```

You can also install a specific version. Refer to available [versions](https://www.npmjs.com/package/flowise?activeTab=versions).

```bash
npm install -g flowise@x.x.x
```

2. Start Flowise:

```bash
npx flowise start
```

3. Open: <http://localhost:3000>

***

## Docker

There are two ways to deploy Flowise with Docker. First, git clone the project: <https://github.com/FlowiseAI/Flowise>

### Docker Compose

1. Go to the `docker` folder at the root of the project
2. Copy the `.env.example` file and rename the copy to `.env`
3. Run:

```bash
docker compose up -d
```

4. Open: <http://localhost:3000>
5. You can bring the containers down by running:

```bash
docker compose stop
```

### Docker Image

1. Build the image:

```bash
docker build --no-cache -t flowise .
```

2. Run the image:

```bash
docker run -d --name flowise -p 3000:3000 flowise
```

3. Stop the image:

```bash
docker stop flowise
```

***

## For Developers

Flowise has four modules in a single monorepo:

* **Server**: Node backend that serves API logic
* **UI**: React frontend
* **Components**: Integration components
* **API Documentation**: Swagger spec for Flowise APIs

### Prerequisite

Install [PNPM](https://pnpm.io/installation).

```bash
npm i -g pnpm
```

### Setup 1

Simple setup using PNPM:

1. Clone the repository

```bash
git clone https://github.com/FlowiseAI/Flowise.git
```

2. Go into repository folder

```bash
cd Flowise
```

3. Install all dependencies of all modules:

```bash
pnpm install
```

4. Build the code:

```bash
pnpm build
```

5. Start the app at <http://localhost:3000>:

```bash
pnpm start
```

### Setup 2

Step-by-step setup for project contributors:

1. Fork the official [Flowise Github Repository](https://github.com/FlowiseAI/Flowise)
2. Clone your forked repository
3. Create a new branch, see [guide](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-and-deleting-branches-within-your-repository). Naming conventions:
   * For feature branch: `feature/<Your New Feature>`
   * For bug fix branch: `bugfix/<Your New Bugfix>`
4. Switch to the branch you just created
5. Go into repository folder:

```bash
cd Flowise
```

6. Install all dependencies of all modules:

```bash
pnpm install
```

7. Build the code:

```bash
pnpm build
```

8. Start the app at <http://localhost:3000>:

```bash
pnpm start
```

9. For a development build:

* Create a `.env` file in `packages/ui` and specify the `PORT` (refer to `.env.example`)
* Create a `.env` file in `packages/server` and specify the `PORT` (refer to `.env.example`)
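For example, the two `.env` files for a development build might look like this (illustrative values; see each package's `.env.example` for the authoritative defaults):

```
# packages/ui/.env
PORT=8080

# packages/server/.env
PORT=3000
```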

```bash
pnpm dev
```

* Any changes made in `packages/ui` or `packages/server` will be reflected at [http://localhost:8080](http://localhost:8080/)
* For changes made in `packages/components`, you will need to build again to pick up the changes
* After making all the changes, run:

  ```bash
  pnpm build
  ```

  and

  ```bash
  pnpm start
  ```

  to make sure everything works fine in production.

***

## For Enterprise

Before starting the app, enterprise users are required to fill in the values for Enterprise Parameters in the `.env` file. Refer to `.env.example` for the required changes.

Reach out to <support@flowiseai.com> for the values of the following env variables:

```
LICENSE_URL
FLOWISE_EE_LICENSE_KEY
```

***

## Learn More

In this video tutorial, Leon provides an introduction to Flowise and explains how to set it up on your local machine.

{% embed url="https://youtu.be/nqAK_L66sIQ" %}

## Community Guide

* [Introduction to \[Practical\] Building LLM Applications with Flowise / LangChain](https://volcano-ice-cd6.notion.site/Introduction-to-Practical-Building-LLM-Applications-with-Flowise-LangChain-03d6d75bfd20495d96dfdae964bea5a5)
* [Introduction to \[Practical\] Building LLM Applications with Flowise / LangChain (Japanese edition)](https://volcano-ice-cd6.notion.site/Flowise-LangChain-LLM-e106bb0f7e2241379aad8fa428ee064a)


# Contribution Guide

Learn how to contribute to this project

***

We appreciate all contributions! No matter your skill level or technical background, you can help this project grow. Here are a few ways to contribute:

## ⭐ Star

Star and share the [Github Repo](https://github.com/FlowiseAI/Flowise).

## 🙌 Share Chatflow

Yes! Sharing how you use Flowise is a way of contributing. Export your chatflow as JSON, attach a screenshot, and share it in the [Show and Tell section](https://github.com/FlowiseAI/Flowise/discussions/categories/show-and-tell).

## 💡 Ideas

We welcome ideas for new features and app integrations. Submit your suggestions to the [Ideas section](https://github.com/FlowiseAI/Flowise/discussions/categories/ideas).

## 🙋 Q\&A

Want to learn more? Search for answers in the [Q\&A section](https://github.com/FlowiseAI/Flowise/discussions/categories/q-a). If you can't find one, don't hesitate to create a new question; it might help others who have similar questions.

## 🐞 Report Bugs

Found an issue? [Report it](https://github.com/FlowiseAI/Flowise/issues/new/choose).

## 📖 Contribute to Docs

1. Fork the official [Flowise Docs Repo](https://github.com/FlowiseAI/FlowiseDocs)
2. Clone your forked repository
3. Create a new branch
4. Switch to the branch you just created
5. Go into repository folder

   ```bash
   cd FlowiseDocs
   ```
6. Make changes
7. Commit your changes and submit a Pull Request from your forked branch to [FlowiseDocs main](https://github.com/FlowiseAI/FlowiseDocs)

## 👨‍💻 Contribute to Code

To learn how to contribute code, go to the [For Developers](/getting-started#setup-2) section and follow the instructions.

If you are contributing to a new node integration, read the [Building Node](/contributing/building-node) guide.

## 🏷️ Pull Request process

A member of the FlowiseAI team will automatically be notified/assigned when you open a pull request. You can also reach out to us on [Discord](https://discord.gg/jbaHfsRVBW).

## 📜 Code of Conduct

This project and everyone participating in it are governed by the [Code of Conduct](https://github.com/FlowiseAI/Flowise/blob/main/CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code.

Please report unacceptable behavior to <hello@flowiseai.com>.


# Building Node

### Install Git

First, install Git and clone the Flowise repository. You can follow the steps from the [For Developers](/getting-started#for-developers) section of the Get Started guide.

### Structure

Flowise keeps every node integration under the `packages/components/nodes` folder. Let's try to create a simple Tool!

### Create Calculator Tool

Create a new folder named `Calculator` under the `packages/components/nodes/tools` folder. Then create a new file named `Calculator.ts`. Inside the file, we will first write the base class.

```javascript
import { INode } from '../../../src/Interface'

class Calculator_Tools implements INode {
    label: string
    name: string
    version: number
    description: string
    type: string
    icon: string
    category: string
    author: string
    baseClasses: string[]

    constructor() {
        this.label = 'Calculator'
        this.name = 'calculator'
        this.version = 1.0
        this.type = 'Calculator'
        this.icon = 'calculator.svg'
        this.category = 'Tools'
        this.author = 'Your Name'
        this.description = 'Perform calculations on response'
        // The Calculator class does not exist yet -- we will create it in
        // core.ts next, then extend baseClasses with getBaseClasses(Calculator).
        this.baseClasses = [this.type]
    }
}

module.exports = { nodeClass: Calculator_Tools }
```

Every node implements the `INode` interface. Here is a breakdown of what each property means:

<table><thead><tr><th width="271">Property</th><th>Description</th></tr></thead><tbody><tr><td>label</td><td>The name of the node that appears on the UI</td></tr><tr><td>name</td><td>The name that is used by code. Must be <strong>camelCase</strong></td></tr><tr><td>version</td><td>Version of the node</td></tr><tr><td>type</td><td>Usually the same as label. Defines which nodes can be connected to this specific type on the UI</td></tr><tr><td>icon</td><td>Icon of the node</td></tr><tr><td>category</td><td>Category of the node</td></tr><tr><td>author</td><td>Creator of the node</td></tr><tr><td>description</td><td>Node description</td></tr><tr><td>baseClasses</td><td>The base classes of the node, since a node can extend from a base component. Used to define which nodes can be connected to this node on the UI</td></tr></tbody></table>

### Define Class

Now that the component class is partially finished, we can define the actual Tool class, in this case `Calculator`.

Create a new file named `core.ts` under the same `Calculator` folder:

```javascript
import { Parser } from "expr-eval"
import { Tool } from "@langchain/core/tools"

export class Calculator extends Tool {
    name = "calculator"
    description = `Useful for getting the result of a math expression. The input to this tool should be a valid mathematical expression that could be executed by a simple calculator.`
 
    async _call(input: string) {
        try {
            return Parser.evaluate(input).toString()
        } catch (error) {
            return "I don't know how to do that."
        }
    }
}
```
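Conceptually, when the LLM decides to use the tool, the runtime invokes `_call` with the raw expression string and expects a string back. The stand-in below illustrates that contract without the `expr-eval` dependency: it only handles simple `a op b` input, whereas the real class above delegates to `Parser.evaluate`.

```javascript
// Self-contained stand-in for the tool contract: _call takes the raw
// input string and always resolves to a string. Unlike the real node,
// which uses expr-eval's Parser, this only evaluates "a op b".
class CalculatorSketch {
    async _call(input) {
        try {
            const m = input.match(/^\s*(-?\d+(?:\.\d+)?)\s*([+\-*/])\s*(-?\d+(?:\.\d+)?)\s*$/)
            if (!m) throw new Error('unsupported expression')
            const [, a, op, b] = m
            const x = parseFloat(a)
            const y = parseFloat(b)
            const result = op === '+' ? x + y : op === '-' ? x - y : op === '*' ? x * y : x / y
            return result.toString()
        } catch {
            // Mirrors the node above: never throw at the agent,
            // return a plain-text fallback instead.
            return "I don't know how to do that."
        }
    }
}
```

The try/catch fallback matters: returning a sentence instead of throwing lets the agent read the failure and recover.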

### Finishing

Head back to the `Calculator.ts` file. We can finish up by adding an `async init` function, in which we initialize the `Calculator` class created above. When a flow is executed, the `init` function of each node is called, and the `_call` function runs whenever the LLM decides to call this tool.

```javascript
import { INode } from '../../../src/Interface'
import { getBaseClasses } from '../../../src/utils'
import { Calculator } from './core'

class Calculator_Tools implements INode {
    label: string
    name: string
    version: number
    description: string
    type: string
    icon: string
    category: string
    author: string
    baseClasses: string[]

    constructor() {
        this.label = 'Calculator'
        this.name = 'calculator'
        this.version = 1.0
        this.type = 'Calculator'
        this.icon = 'calculator.svg'
        this.category = 'Tools'
        this.author = 'Your Name'
        this.description = 'Perform calculations on response'
        this.baseClasses = [this.type, ...getBaseClasses(Calculator)]
    }
    
 
    async init() {
        return new Calculator()
    }
}

module.exports = { nodeClass: Calculator_Tools }
```

### Build and Run

In the `.env` file inside `packages/server`, create a new env variable:

```
SHOW_COMMUNITY_NODES=true
```

Now we can run `pnpm build` and `pnpm start` to bring the component to life!

<figure><img src="/files/eNE0iqOSUwlbsO8wW3hm" alt=""><figcaption></figcaption></figure>


# API Reference

Using the Flowise public API, you can programmatically execute many of the same tasks as in the GUI. This section introduces the Flowise REST API.

* [Assistants](/api-reference/assistants)
* [Attachments](/api-reference/attachments)
* [Chat Message](/api-reference/chat-message)
* [Chatflows](/api-reference/chatflows)
* [Document Store](/api-reference/document-store)
* [Feedback](/api-reference/feedback)
* [Leads](/api-reference/leads)
* [Ping](/api-reference/ping)
* [Prediction](/api-reference/prediction)
* [Tools](/api-reference/tools)
* [Upsert History](/api-reference/upsert-history)
* [Variables](/api-reference/variables)
* [Vector Upsert](/api-reference/vector-upsert)
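Most endpoints share the same call pattern: a JSON body (where applicable) and a bearer token in the `Authorization` header. As a concrete sketch, the request below targets the Prediction endpoint to execute a chatflow; the chatflow ID and API key are placeholders for values from your own instance, and the base URL assumes a default local install.

```javascript
// Placeholders -- substitute values from your own Flowise instance.
const chatflowId = '<chatflow-id>'
const apiKey = '<your-api-key>'

// Build the request for POST /api/v1/prediction/{chatflowId}.
const url = `http://localhost:3000/api/v1/prediction/${chatflowId}`
const options = {
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${apiKey}`
    },
    body: JSON.stringify({ question: 'Hey, how are you?' })
}

// With a running instance, send it with:
// const data = await fetch(url, options).then((res) => res.json())
```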


# Assistants

## Create a new assistant

> Create a new assistant with the provided details

```json
{"tags":[{"name":"assistants"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"Assistant":{"type":"object","properties":{"id":{"type":"string"},"details":{"type":"object","properties":{"id":{"type":"string"},"name":{"type":"string"},"description":{"type":"string"},"model":{"type":"string"},"instructions":{"type":"string"},"temperature":{"type":"number"},"top_p":{"type":"number"},"tools":{"type":"array","items":{"type":"string"}},"tool_resources":{"type":"object","additionalProperties":{"type":"object"}}}},"credential":{"type":"string"},"iconSrc":{"type":"string"},"createdDate":{"type":"string","format":"date-time"},"updatedDate":{"type":"string","format":"date-time"}}}}},"paths":{"/assistants":{"post":{"tags":["assistants"],"operationId":"createAssistant","summary":"Create a new assistant","description":"Create a new assistant with the provided details","requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Assistant"}}},"required":true},"responses":{"200":{"description":"Assistant created successfully","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Assistant"}}}},"400":{"description":"Invalid input provided"},"422":{"description":"Validation exception"}}}}}}
```
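A request body following the `Assistant` schema might look like this sketch; every value, including the model name and credential ID, is an illustrative placeholder.

```javascript
// Illustrative POST /assistants body -- all values are placeholders.
const newAssistant = {
    details: {
        name: 'Research Helper',
        description: 'Answers questions about uploaded papers',
        model: 'gpt-4o', // assumes a model your credential supports
        instructions: 'You are a helpful research assistant.',
        temperature: 0.5,
        top_p: 1,
        tools: ['code_interpreter'],
        tool_resources: {}
    },
    credential: '<credential-id>',
    iconSrc: ''
}

// Serialize for the application/json request body.
const body = JSON.stringify(newAssistant)
```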

## Get assistant by ID

> Retrieve a specific assistant by ID

```json
{"tags":[{"name":"assistants"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"Assistant":{"type":"object","properties":{"id":{"type":"string"},"details":{"type":"object","properties":{"id":{"type":"string"},"name":{"type":"string"},"description":{"type":"string"},"model":{"type":"string"},"instructions":{"type":"string"},"temperature":{"type":"number"},"top_p":{"type":"number"},"tools":{"type":"array","items":{"type":"string"}},"tool_resources":{"type":"object","additionalProperties":{"type":"object"}}}},"credential":{"type":"string"},"iconSrc":{"type":"string"},"createdDate":{"type":"string","format":"date-time"},"updatedDate":{"type":"string","format":"date-time"}}}}},"paths":{"/assistants/{id}":{"get":{"tags":["assistants"],"summary":"Get assistant by ID","description":"Retrieve a specific assistant by ID","operationId":"getAssistantById","parameters":[{"in":"path","name":"id","required":true,"schema":{"type":"string"},"description":"Assistant ID"}],"responses":{"200":{"description":"Successful operation","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Assistant"}}}},"400":{"description":"The specified ID is invalid"},"404":{"description":"Assistant not found"},"500":{"description":"Internal error"}}}}}}
```

## Update assistant details

> Update the details of an existing assistant

```json
{"tags":[{"name":"assistants"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"Assistant":{"type":"object","properties":{"id":{"type":"string"},"details":{"type":"object","properties":{"id":{"type":"string"},"name":{"type":"string"},"description":{"type":"string"},"model":{"type":"string"},"instructions":{"type":"string"},"temperature":{"type":"number"},"top_p":{"type":"number"},"tools":{"type":"array","items":{"type":"string"}},"tool_resources":{"type":"object","additionalProperties":{"type":"object"}}}},"credential":{"type":"string"},"iconSrc":{"type":"string"},"createdDate":{"type":"string","format":"date-time"},"updatedDate":{"type":"string","format":"date-time"}}}}},"paths":{"/assistants/{id}":{"put":{"tags":["assistants"],"summary":"Update assistant details","description":"Update the details of an existing assistant","operationId":"updateAssistant","parameters":[{"in":"path","name":"id","required":true,"schema":{"type":"string"},"description":"Assistant ID"}],"requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Assistant"}}}},"responses":{"200":{"description":"Assistant updated successfully","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Assistant"}}}},"400":{"description":"The specified ID is invalid or body is missing"},"404":{"description":"Assistant not found"},"500":{"description":"Internal error"}}}}}}
```

## Delete an assistant

> Delete an assistant by ID

```json
{"tags":[{"name":"assistants"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}}},"paths":{"/assistants/{id}":{"delete":{"tags":["assistants"],"summary":"Delete an assistant","description":"Delete an assistant by ID","operationId":"deleteAssistant","parameters":[{"in":"path","name":"id","required":true,"schema":{"type":"string"},"description":"Assistant ID"}],"responses":{"200":{"description":"Assistant deleted successfully"},"400":{"description":"The specified ID is invalid"},"404":{"description":"Assistant not found"},"500":{"description":"Internal error"}}}}}}
```


# Attachments

## Create attachments array

> Return contents of the files in plain string format

```json
{"tags":[{"name":"attachments"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"CreateAttachmentResponse":{"type":"object","properties":{"name":{"type":"string","description":"Name of the file"},"mimeType":{"type":"string","description":"Mime type of the file"},"size":{"type":"string","description":"Size of the file"},"content":{"type":"string","description":"Content of the file in string format"}}}}},"paths":{"/attachments/{chatflowId}/{chatId}":{"post":{"tags":["attachments"],"operationId":"createAttachment","summary":"Create attachments array","description":"Return contents of the files in plain string format","parameters":[{"in":"path","name":"chatflowId","required":true,"schema":{"type":"string"},"description":"Chatflow ID"},{"in":"path","name":"chatId","required":true,"schema":{"type":"string"},"description":"Chat ID"}],"requestBody":{"content":{"multipart/form-data":{"schema":{"type":"object","properties":{"files":{"type":"array","items":{"type":"string","format":"binary"},"description":"Files to be uploaded"},"base64":{"type":"boolean","default":false,"description":"Return contents of the files in base64 format"}},"required":["files"]}}},"required":true},"responses":{"200":{"description":"Attachments created successfully","content":{"application/json":{"schema":{"type":"array","items":{"$ref":"#/components/schemas/CreateAttachmentResponse"}}}}},"400":{"description":"Invalid input provided"},"404":{"description":"Chatflow or ChatId not found"},"422":{"description":"Validation error"},"500":{"description":"Internal server error"}}}}}}
```
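This endpoint takes `multipart/form-data`. Below is a minimal sketch using the `FormData` and `Blob` globals available in Node 18+; both IDs are placeholders.

```javascript
// Placeholders -- substitute a real chatflow ID and chat ID.
const chatflowId = '<chatflow-id>'
const chatId = '<chat-id>'

// Assemble the multipart body for POST /attachments/{chatflowId}/{chatId}.
const form = new FormData()
form.append('files', new Blob(['hello world'], { type: 'text/plain' }), 'hello.txt')
form.append('base64', 'false') // set to 'true' to receive file contents as base64

const url = `http://localhost:3000/api/v1/attachments/${chatflowId}/${chatId}`
// const res = await fetch(url, { method: 'POST', body: form })
```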


# Chat Message

## List all chat messages

> Retrieve all chat messages for a specific chatflow.

```json
{"tags":[{"name":"chatmessage"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"ChatMessage":{"type":"object","properties":{"id":{"type":"string","format":"uuid"},"role":{"type":"string","enum":["apiMessage","userMessage"]},"chatflowid":{"type":"string","format":"uuid"},"content":{"type":"string"},"sourceDocuments":{"type":"array","nullable":true,"items":{"$ref":"#/components/schemas/Document"}},"usedTools":{"type":"array","nullable":true,"items":{"$ref":"#/components/schemas/UsedTool"}},"fileAnnotations":{"type":"array","nullable":true,"items":{"$ref":"#/components/schemas/FileAnnotation"}},"agentReasoning":{"type":"array","nullable":true,"items":{"$ref":"#/components/schemas/AgentReasoning"}},"fileUploads":{"type":"array","nullable":true,"items":{"$ref":"#/components/schemas/FileUpload"}},"action":{"type":"array","nullable":true,"items":{"$ref":"#/components/schemas/Action"}},"chatType":{"type":"string","enum":["INTERNAL","EXTERNAL"]},"chatId":{"type":"string"},"memoryType":{"type":"string","nullable":true},"sessionId":{"type":"string","nullable":true},"createdDate":{"type":"string","format":"date-time"},"leadEmail":{"type":"string","nullable":true}}},"Document":{"type":"object","properties":{"pageContent":{"type":"string"},"metadata":{"type":"object","additionalProperties":{"type":"string"}}}},"UsedTool":{"type":"object","properties":{"tool":{"type":"string"},"toolInput":{"type":"object","additionalProperties":{"type":"string"}},"toolOutput":{"type":"string"}}},"FileAnnotation":{"type":"object","properties":{"filePath":{"type":"string"},"fileName":{"type":"string"}}},"AgentReasoning":{"type":"object","properties":{"agentName":{"type":"string"},"messages":{"type":"array","items":{"type":"string"}},"nodeName":{"type":"string"},"nodeId":{"type":"string"},"usedTools":{"type":"array","items":{"$ref":"#/components/schemas/UsedTool"}},"sourceDocuments":{"type":"array","items":{"$ref":"#/components/schemas/Document"}},"state":{"type":"object","additionalProperties":{"type":"string"}}}},"FileUpload":{"type":"object","properties":{"data":{"type":"string"},"type":{"type":"string"},"name":{"type":"string"},"mime":{"type":"string"}}},"Action":{"type":"object","properties":{"id":{"type":"string","format":"uuid"},"mapping":{"type":"object","properties":{"approve":{"type":"string"},"reject":{"type":"string"},"toolCalls":{"type":"array"}}},"elements":{"type":"array"}}}}},"paths":{"/chatmessage/{id}":{"get":{"tags":["chatmessage"],"operationId":"getAllChatMessages","summary":"List all chat messages","description":"Retrieve all chat messages for a specific chatflow.","parameters":[{"in":"path","name":"id","required":true,"schema":{"type":"string"},"description":"Chatflow ID"},{"in":"query","name":"chatType","schema":{"type":"string","enum":["INTERNAL","EXTERNAL"]},"description":"Filter by chat type"},{"in":"query","name":"order","schema":{"type":"string","enum":["ASC","DESC"]},"description":"Sort order"},{"in":"query","name":"chatId","schema":{"type":"string"},"description":"Filter by chat ID"},{"in":"query","name":"memoryType","schema":{"type":"string"},"description":"Filter by memory type"},{"in":"query","name":"sessionId","schema":{"type":"string"},"description":"Filter by session ID"},{"in":"query","name":"startDate","schema":{"type":"string","format":"date-time"},"description":"Filter by start date"},{"in":"query","name":"endDate","schema":{"type":"string","format":"date-time"},"description":"Filter by end date"},{"in":"query","name":"feedback","schema":{"type":"boolean"},"description":"Filter by feedback"},{"in":"query","name":"feedbackType","schema":{"type":"string","enum":["THUMBS_UP","THUMBS_DOWN"]},"description":"Filter by feedback type. Only applicable if feedback is true"}],"responses":{"200":{"description":"A list of chat messages","content":{"application/json":{"schema":{"type":"array","items":{"$ref":"#/components/schemas/ChatMessage"}}}}},"500":{"description":"Internal error"}}}}}}
```
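The query filters above compose into an ordinary query string. A sketch with illustrative filter values (the chatflow ID is a placeholder):

```javascript
// Compose filters documented for GET /chatmessage/{id}; values are illustrative.
const params = new URLSearchParams({
    chatType: 'EXTERNAL',
    order: 'DESC',
    startDate: '2024-01-01T00:00:00.000Z',
    endDate: '2024-12-31T23:59:59.000Z'
})

const chatflowId = '<chatflow-id>' // placeholder
const url = `http://localhost:3000/api/v1/chatmessage/${chatflowId}?${params}`
```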

## Delete all chat messages

> Delete all chat messages for a specific chatflow.

```json
{"tags":[{"name":"chatmessage"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}}},"paths":{"/chatmessage/{id}":{"delete":{"tags":["chatmessage"],"operationId":"removeAllChatMessages","summary":"Delete all chat messages","description":"Delete all chat messages for a specific chatflow.","parameters":[{"in":"path","name":"id","required":true,"schema":{"type":"string"},"description":"Chatflow ID"},{"in":"query","name":"chatId","schema":{"type":"string"},"description":"Filter by chat ID"},{"in":"query","name":"memoryType","schema":{"type":"string"},"description":"Filter by memory type"},{"in":"query","name":"sessionId","schema":{"type":"string"},"description":"Filter by session ID"},{"in":"query","name":"chatType","schema":{"type":"string","enum":["INTERNAL","EXTERNAL"]},"description":"Filter by chat type"},{"in":"query","name":"startDate","schema":{"type":"string"},"description":"Filter by start date"},{"in":"query","name":"endDate","schema":{"type":"string"},"description":"Filter by end date"},{"in":"query","name":"feedbackType","schema":{"type":"string","enum":["THUMBS_UP","THUMBS_DOWN"]},"description":"Filter by feedback type"},{"in":"query","name":"hardDelete","schema":{"type":"boolean"},"description":"If hardDelete is true, messages will be deleted from the third party service as well"}],"responses":{"200":{"description":"Chat messages deleted successfully"},"400":{"description":"Invalid parameters"},"404":{"description":"Chat messages not found"},"500":{"description":"Internal error"}}}}}}
```


# Chatflows

## List all chatflows

> Retrieve a list of all chatflows

```json
{"tags":[{"name":"chatflows"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"Chatflow":{"type":"object","properties":{"id":{"type":"string"},"name":{"type":"string"},"flowData":{"type":"string"},"deployed":{"type":"boolean"},"isPublic":{"type":"boolean"},"apikeyid":{"type":"string"},"chatbotConfig":{"type":"string"},"apiConfig":{"type":"string"},"analytic":{"type":"string"},"speechToText":{"type":"string"},"category":{"type":"string"},"type":{"type":"string","enum":["CHATFLOW","MULTIAGENT"]},"createdDate":{"type":"string","format":"date-time"},"updatedDate":{"type":"string","format":"date-time"}}}}},"paths":{"/chatflows":{"get":{"tags":["chatflows"],"summary":"List all chatflows","description":"Retrieve a list of all chatflows","operationId":"listChatflows","responses":{"200":{"description":"Successful operation","content":{"application/json":{"schema":{"type":"array","items":{"$ref":"#/components/schemas/Chatflow"}}}}},"500":{"description":"Internal error"}}}}}}
```

## Get chatflow by ID

> Retrieve a specific chatflow by ID

```json
{"tags":[{"name":"chatflows"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"Chatflow":{"type":"object","properties":{"id":{"type":"string"},"name":{"type":"string"},"flowData":{"type":"string"},"deployed":{"type":"boolean"},"isPublic":{"type":"boolean"},"apikeyid":{"type":"string"},"chatbotConfig":{"type":"string"},"apiConfig":{"type":"string"},"analytic":{"type":"string"},"speechToText":{"type":"string"},"category":{"type":"string"},"type":{"type":"string","enum":["CHATFLOW","MULTIAGENT"]},"createdDate":{"type":"string","format":"date-time"},"updatedDate":{"type":"string","format":"date-time"}}}}},"paths":{"/chatflows/{id}":{"get":{"tags":["chatflows"],"summary":"Get chatflow by ID","description":"Retrieve a specific chatflow by ID","operationId":"getChatflowById","parameters":[{"in":"path","name":"id","required":true,"schema":{"type":"string"},"description":"Chatflow ID"}],"responses":{"200":{"description":"Successful operation","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Chatflow"}}}},"400":{"description":"The specified ID is invalid"},"404":{"description":"Chatflow not found"},"500":{"description":"Internal error"}}}}}}
```

## Get chatflow by API key

> Retrieve a chatflow using an API key

```json
{"tags":[{"name":"chatflows"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"Chatflow":{"type":"object","properties":{"id":{"type":"string"},"name":{"type":"string"},"flowData":{"type":"string"},"deployed":{"type":"boolean"},"isPublic":{"type":"boolean"},"apikeyid":{"type":"string"},"chatbotConfig":{"type":"string"},"apiConfig":{"type":"string"},"analytic":{"type":"string"},"speechToText":{"type":"string"},"category":{"type":"string"},"type":{"type":"string","enum":["CHATFLOW","MULTIAGENT"]},"createdDate":{"type":"string","format":"date-time"},"updatedDate":{"type":"string","format":"date-time"}}}}},"paths":{"/chatflows/apikey/{apikey}":{"get":{"tags":["chatflows"],"summary":"Get chatflow by API key","description":"Retrieve a chatflow using an API key","operationId":"getChatflowByApiKey","parameters":[{"in":"path","name":"apikey","required":true,"schema":{"type":"string"},"description":"API key associated with the chatflow"}],"responses":{"200":{"description":"Successful operation","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Chatflow"}}}},"400":{"description":"The specified API key is invalid"},"404":{"description":"Chatflow not found"},"500":{"description":"Internal error"}}}}}}
```

## Update chatflow details

> Update the details of an existing chatflow

```json
{"tags":[{"name":"chatflows"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"Chatflow":{"type":"object","properties":{"id":{"type":"string"},"name":{"type":"string"},"flowData":{"type":"string"},"deployed":{"type":"boolean"},"isPublic":{"type":"boolean"},"apikeyid":{"type":"string"},"chatbotConfig":{"type":"string"},"apiConfig":{"type":"string"},"analytic":{"type":"string"},"speechToText":{"type":"string"},"category":{"type":"string"},"type":{"type":"string","enum":["CHATFLOW","MULTIAGENT"]},"createdDate":{"type":"string","format":"date-time"},"updatedDate":{"type":"string","format":"date-time"}}}}},"paths":{"/chatflows/{id}":{"put":{"tags":["chatflows"],"summary":"Update chatflow details","description":"Update the details of an existing chatflow","operationId":"updateChatflow","parameters":[{"in":"path","name":"id","required":true,"schema":{"type":"string"},"description":"Chatflow ID"}],"requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Chatflow"}}}},"responses":{"200":{"description":"Chatflow updated successfully","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Chatflow"}}}},"400":{"description":"The specified ID is invalid or body is missing"},"404":{"description":"Chatflow not found"},"500":{"description":"Internal error"}}}}}}
```
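Because the request body reuses the `Chatflow` schema, an update can be sketched as a PUT with just the properties you want to change. Standard library only; the base URL and API key are placeholder assumptions, not values from the spec.

```python
import json
import urllib.request

BASE_URL = "http://localhost:3000/api/v1"  # assumption: default local instance
API_KEY = "YOUR_API_KEY"                   # hypothetical placeholder

def build_update_chatflow_request(chatflow_id: str, fields: dict) -> urllib.request.Request:
    """Build a PUT /chatflows/{id} request; `fields` carries the Chatflow
    properties to change (e.g. name, deployed, isPublic)."""
    return urllib.request.Request(
        f"{BASE_URL}/chatflows/{chatflow_id}",
        data=json.dumps(fields).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="PUT",
    )
```

Per the spec, a missing body yields the documented 400 and an unknown ID yields 404.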

## Delete a chatflow

> Delete a chatflow by ID

```json
{"tags":[{"name":"chatflows"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}}},"paths":{"/chatflows/{id}":{"delete":{"tags":["chatflows"],"summary":"Delete a chatflow","description":"Delete a chatflow by ID","operationId":"deleteChatflow","parameters":[{"in":"path","name":"id","required":true,"schema":{"type":"string"},"description":"Chatflow ID"}],"responses":{"200":{"description":"Chatflow deleted successfully"},"400":{"description":"The specified ID is invalid"},"404":{"description":"Chatflow not found"},"500":{"description":"Internal error"}}}}}}
```
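Deletion is a bare DELETE on the same path used for reads. A sketch with the same placeholder assumptions (local instance, dummy key):

```python
import urllib.request

BASE_URL = "http://localhost:3000/api/v1"  # assumption: default local instance
API_KEY = "YOUR_API_KEY"                   # hypothetical placeholder

def build_delete_chatflow_request(chatflow_id: str) -> urllib.request.Request:
    """Build a DELETE /chatflows/{id} request."""
    return urllib.request.Request(
        f"{BASE_URL}/chatflows/{chatflow_id}",
        headers={"Authorization": f"Bearer {API_KEY}"},
        method="DELETE",
    )
```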

## Create a new chatflow

> Create a new chatflow with the provided details

```json
{"tags":[{"name":"chatflows"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"Chatflow":{"type":"object","properties":{"id":{"type":"string"},"name":{"type":"string"},"flowData":{"type":"string"},"deployed":{"type":"boolean"},"isPublic":{"type":"boolean"},"apikeyid":{"type":"string"},"chatbotConfig":{"type":"string"},"apiConfig":{"type":"string"},"analytic":{"type":"string"},"speechToText":{"type":"string"},"category":{"type":"string"},"type":{"type":"string","enum":["CHATFLOW","MULTIAGENT"]},"createdDate":{"type":"string","format":"date-time"},"updatedDate":{"type":"string","format":"date-time"}}}}},"paths":{"/chatflows":{"post":{"tags":["chatflows"],"operationId":"createChatflow","summary":"Create a new chatflow","description":"Create a new chatflow with the provided details","requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Chatflow"}}},"required":true},"responses":{"200":{"description":"Chatflow created successfully","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Chatflow"}}}},"400":{"description":"Invalid input provided"},"422":{"description":"Validation exception"}}}}}}
```
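Creation is a POST with a required JSON body. One subtlety worth showing: per the schema, `flowData` is a string containing the serialized flow graph, so it must be JSON-encoded twice if you build the graph as a dict. A hedged sketch with placeholder base URL and key:

```python
import json
import urllib.request

BASE_URL = "http://localhost:3000/api/v1"  # assumption: default local instance
API_KEY = "YOUR_API_KEY"                   # hypothetical placeholder

def build_create_chatflow_request(name: str, flow_data: str) -> urllib.request.Request:
    """Build a POST /chatflows request. flowData is passed as a string
    (the serialized flow graph), matching the Chatflow schema."""
    body = json.dumps({"name": name, "flowData": flow_data}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chatflows",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# e.g. build_create_chatflow_request("demo", json.dumps({"nodes": [], "edges": []}))
```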


# Document Store

## Get a specific document store

> Retrieves details of a specific document store by its ID

```json
{"tags":[{"name":"document-store"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"DocumentStore":{"type":"object","properties":{"id":{"type":"string","format":"uuid","description":"Unique identifier for the document store"},"name":{"type":"string","description":"Name of the document store"},"description":{"type":"string","description":"Description of the document store"},"loaders":{"type":"string","description":"Loaders associated with the document store, stored as JSON string"},"whereUsed":{"type":"string","description":"Places where the document store is used, stored as JSON string"},"status":{"type":"string","enum":["EMPTY","SYNC","SYNCING","STALE","NEW","UPSERTING","UPSERTED"],"description":"Status of the document store"},"vectorStoreConfig":{"type":"string","description":"Configuration for the vector store, stored as JSON string"},"embeddingConfig":{"type":"string","description":"Configuration for the embedding, stored as JSON string"},"recordManagerConfig":{"type":"string","description":"Configuration for the record manager, stored as JSON string"},"createdDate":{"type":"string","format":"date-time","description":"Date and time when the document store was created"},"updatedDate":{"type":"string","format":"date-time","description":"Date and time when the document store was last updated"}}}}},"paths":{"/document-store/store/{id}":{"get":{"tags":["document-store"],"summary":"Get a specific document store","description":"Retrieves details of a specific document store by its ID","operationId":"getDocumentStoreById","parameters":[{"in":"path","name":"id","required":true,"schema":{"type":"string","format":"uuid"},"description":"Document Store ID"}],"responses":{"200":{"description":"Successfully retrieved document store","content":{"application/json":{"schema":{"$ref":"#/components/schemas/DocumentStore"}}}},"404":{"description":"Document store not 
found"},"500":{"description":"Internal server error"}}}}}}
```

## Get chunks from a specific document loader

> Get chunks from a specific document loader within a document store

```json
{"tags":[{"name":"document-store"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"DocumentStoreFileChunkPagedResponse":{"type":"object","properties":{"chunks":{"type":"array","items":{"$ref":"#/components/schemas/DocumentStoreFileChunk"}},"count":{"type":"number"},"file":{"$ref":"#/components/schemas/DocumentStoreLoaderForPreview"},"currentPage":{"type":"number"},"storeName":{"type":"string"},"description":{"type":"string"}}},"DocumentStoreFileChunk":{"type":"object","properties":{"id":{"type":"string","format":"uuid","description":"Unique identifier for the file chunk"},"docId":{"type":"string","format":"uuid","description":"Document ID within the store"},"storeId":{"type":"string","format":"uuid","description":"Document Store ID"},"chunkNo":{"type":"integer","description":"Chunk number within the document"},"pageContent":{"type":"string","description":"Content of the chunk"},"metadata":{"type":"string","description":"Metadata associated with the chunk"}}},"DocumentStoreLoaderForPreview":{"type":"object","properties":{"id":{"type":"string","format":"uuid","description":"Unique identifier for the document store loader"},"loaderId":{"type":"string","description":"ID of the loader"},"loaderName":{"type":"string","description":"Name of the loader"},"loaderConfig":{"type":"object","description":"Configuration for the loader"},"splitterId":{"type":"string","description":"ID of the text splitter"},"splitterName":{"type":"string","description":"Name of the text splitter"},"splitterConfig":{"type":"object","description":"Configuration for the text splitter"},"totalChunks":{"type":"number","description":"Total number of chunks"},"totalChars":{"type":"number","description":"Total number of characters"},"status":{"type":"string","enum":["EMPTY","SYNC","SYNCING","STALE","NEW","UPSERTING","UPSERTED"],"description":"Status of the document store 
loader"},"storeId":{"type":"string","description":"ID of the document store"},"files":{"type":"array","items":{"$ref":"#/components/schemas/DocumentStoreLoaderFile"}},"source":{"type":"string","description":"Source of the document store loader"},"credential":{"type":"string","description":"Credential associated with the document store loader"},"rehydrated":{"type":"boolean","description":"Whether the loader has been rehydrated"},"preview":{"type":"boolean","description":"Whether the loader is in preview mode"},"previewChunkCount":{"type":"number","description":"Number of chunks in preview mode"}}},"DocumentStoreLoaderFile":{"type":"object","properties":{"id":{"type":"string","format":"uuid","description":"Unique identifier for the file"},"name":{"type":"string","description":"Name of the file"},"mimePrefix":{"type":"string","description":"MIME prefix of the file"},"size":{"type":"number","description":"Size of the file"},"status":{"type":"string","enum":["EMPTY","SYNC","SYNCING","STALE","NEW","UPSERTING","UPSERTED"],"description":"Status of the file"},"uploaded":{"type":"string","format":"date-time","description":"Date and time when the file was uploaded"}}}}},"paths":{"/document-store/chunks/{storeId}/{loaderId}/{pageNo}":{"get":{"tags":["document-store"],"summary":"Get chunks from a specific document loader","description":"Get chunks from a specific document loader within a document store","operationId":"getDocumentStoreFileChunks","parameters":[{"in":"path","name":"storeId","required":true,"schema":{"type":"string","format":"uuid"},"description":"Document Store ID"},{"in":"path","name":"loaderId","required":true,"schema":{"type":"string","format":"uuid"},"description":"Document loader ID"},{"in":"path","name":"pageNo","required":true,"schema":{"type":"string"},"description":"Pagination number"}],"responses":{"200":{"description":"Successfully retrieved chunks from document 
loader","content":{"application/json":{"schema":{"$ref":"#/components/schemas/DocumentStoreFileChunkPagedResponse"}}}},"404":{"description":"Document store not found"},"500":{"description":"Internal server error"}}}}}}
```
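This endpoint takes three path parameters, with pagination driven by the trailing `pageNo` segment. A standard-library sketch (base URL and key are placeholder assumptions):

```python
import urllib.request

BASE_URL = "http://localhost:3000/api/v1"  # assumption: default local instance
API_KEY = "YOUR_API_KEY"                   # hypothetical placeholder

def build_get_chunks_request(store_id: str, loader_id: str, page_no: int) -> urllib.request.Request:
    """Build a GET /document-store/chunks/{storeId}/{loaderId}/{pageNo}
    request; increment page_no to walk through the paged response."""
    return urllib.request.Request(
        f"{BASE_URL}/document-store/chunks/{store_id}/{loader_id}/{page_no}",
        headers={"Authorization": f"Bearer {API_KEY}"},
        method="GET",
    )
```

A 200 response is a `DocumentStoreFileChunkPagedResponse`, whose `count` and `currentPage` fields tell you when to stop paging.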

## List all document stores

> Retrieves a list of all document stores

```json
{"tags":[{"name":"document-store"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"DocumentStore":{"type":"object","properties":{"id":{"type":"string","format":"uuid","description":"Unique identifier for the document store"},"name":{"type":"string","description":"Name of the document store"},"description":{"type":"string","description":"Description of the document store"},"loaders":{"type":"string","description":"Loaders associated with the document store, stored as JSON string"},"whereUsed":{"type":"string","description":"Places where the document store is used, stored as JSON string"},"status":{"type":"string","enum":["EMPTY","SYNC","SYNCING","STALE","NEW","UPSERTING","UPSERTED"],"description":"Status of the document store"},"vectorStoreConfig":{"type":"string","description":"Configuration for the vector store, stored as JSON string"},"embeddingConfig":{"type":"string","description":"Configuration for the embedding, stored as JSON string"},"recordManagerConfig":{"type":"string","description":"Configuration for the record manager, stored as JSON string"},"createdDate":{"type":"string","format":"date-time","description":"Date and time when the document store was created"},"updatedDate":{"type":"string","format":"date-time","description":"Date and time when the document store was last updated"}}}}},"paths":{"/document-store/store":{"get":{"tags":["document-store"],"summary":"List all document stores","description":"Retrieves a list of all document stores","operationId":"getAllDocumentStores","responses":{"200":{"description":"A list of document stores","content":{"application/json":{"schema":{"type":"array","items":{"$ref":"#/components/schemas/DocumentStore"}}}}},"500":{"description":"Internal server error"}}}}}}
```

## Upsert document to document store

> Upsert document to document store

```json
{"tags":[{"name":"document-store"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"DocumentStoreLoaderForUpsert":{"type":"object","properties":{"docId":{"type":"string","format":"uuid","nullable":true,"description":"Document ID within the store. If provided, existing configuration from the document will be used for the new document"},"metadata":{"type":"object","nullable":true,"description":"Metadata associated with the document"},"replaceExisting":{"type":"boolean","nullable":true,"description":"Whether to replace existing document loader with the new upserted chunks. However this does not delete the existing embeddings in the vector store"},"createNewDocStore":{"type":"boolean","nullable":true,"description":"Whether to create a new document store"},"docStore":{"type":"object","nullable":true,"description":"Only when createNewDocStore is true, pass in the new document store configuration","properties":{"name":{"type":"string","description":"Name of the new document store to be created"},"description":{"type":"string","description":"Description of the new document store to be created"}}},"loader":{"type":"object","nullable":true,"properties":{"name":{"type":"string","description":"Name of the loader (camelCase)"},"config":{"type":"object","description":"Configuration for the loader"}}},"splitter":{"type":"object","nullable":true,"properties":{"name":{"type":"string","description":"Name of the text splitter (camelCase)"},"config":{"type":"object","description":"Configuration for the text splitter"}}},"embedding":{"type":"object","nullable":true,"properties":{"name":{"type":"string","description":"Name of the embedding generator (camelCase)"},"config":{"type":"object","description":"Configuration for the embedding generator"}}},"vectorStore":{"type":"object","nullable":true,"properties":{"name":{"type":"string","description":"Name of the vector store 
(camelCase)"},"config":{"type":"object","description":"Configuration for the vector store"}}},"recordManager":{"type":"object","nullable":true,"properties":{"name":{"type":"string","description":"Name of the record manager (camelCase)"},"config":{"type":"object","description":"Configuration for the record manager"}}}}},"VectorUpsertResponse":{"type":"object","properties":{"numAdded":{"type":"number","description":"Number of vectors added"},"numDeleted":{"type":"number","description":"Number of vectors deleted"},"numUpdated":{"type":"number","description":"Number of vectors updated"},"numSkipped":{"type":"number","description":"Number of vectors skipped (not added, deleted, or updated)"},"addedDocs":{"type":"array","items":{"$ref":"#/components/schemas/Document"}}}},"Document":{"type":"object","properties":{"pageContent":{"type":"string"},"metadata":{"type":"object","additionalProperties":{"type":"string"}}}}}},"paths":{"/document-store/upsert/{id}":{"post":{"tags":["document-store"],"summary":"Upsert document to document store","description":"Upsert document to document store","operationId":"upsertDocument","parameters":[{"in":"path","name":"id","required":true,"schema":{"type":"string","format":"uuid"},"description":"Document Store ID"}],"requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/DocumentStoreLoaderForUpsert"}},"multipart/form-data":{"schema":{"type":"object","properties":{"files":{"type":"array","items":{"type":"string","format":"binary"},"description":"Files to be uploaded"},"docId":{"type":"string","nullable":true,"description":"Document ID to use existing configuration"},"loader":{"type":"string","nullable":true,"description":"Loader configurations"},"splitter":{"type":"string","nullable":true,"description":"Splitter configurations"},"embedding":{"type":"string","nullable":true,"description":"Embedding configurations"},"vectorStore":{"type":"string","nullable":true,"description":"Vector Store 
configurations"},"recordManager":{"type":"string","nullable":true,"description":"Record Manager configurations"},"metadata":{"type":"object","nullable":true,"description":"Metadata associated with the document"},"replaceExisting":{"type":"boolean","nullable":true,"description":"Whether to replace existing document loader with the new upserted chunks. However this does not delete the existing embeddings in the vector store"},"createNewDocStore":{"type":"boolean","nullable":true,"description":"Whether to create a new document store"},"docStore":{"type":"object","nullable":true,"description":"Only when createNewDocStore is true, pass in the new document store configuration","properties":{"name":{"type":"string","description":"Name of the new document store to be created"},"description":{"type":"string","description":"Description of the new document store to be created"}}}},"required":["files"]}}},"required":true},"responses":{"200":{"description":"Successfully execute upsert operation","content":{"application/json":{"schema":{"$ref":"#/components/schemas/VectorUpsertResponse"}}}},"400":{"description":"Invalid request body"},"500":{"description":"Internal server error"}}}}}}
```
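The endpoint accepts two body types: `application/json` (shown below) and `multipart/form-data` for raw file uploads, where `files` is required. The JSON variant is useful for re-upserting with an existing document's configuration by passing its `docId`. A hedged sketch; base URL, key, and IDs are placeholders:

```python
import json
import urllib.request

BASE_URL = "http://localhost:3000/api/v1"  # assumption: default local instance
API_KEY = "YOUR_API_KEY"                   # hypothetical placeholder

def build_upsert_request(store_id: str, doc_id: str, metadata: dict) -> urllib.request.Request:
    """Build the JSON variant of POST /document-store/upsert/{id}.
    Passing docId reuses the existing document's loader/splitter
    configuration; metadata is attached to the upserted chunks."""
    body = json.dumps({"docId": doc_id, "metadata": metadata}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/document-store/upsert/{store_id}",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

A 200 response is a `VectorUpsertResponse`; its `numAdded`, `numDeleted`, `numUpdated`, and `numSkipped` counters report what actually changed in the vector store.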

## Re-process and upsert all documents in document store

> Re-process and upsert all existing documents in document store

```json
{"tags":[{"name":"document-store"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"DocumentStoreLoaderForRefresh":{"type":"object","properties":{"items":{"type":"array","items":{"$ref":"#/components/schemas/DocumentStoreLoaderForUpsert"}}}},"DocumentStoreLoaderForUpsert":{"type":"object","properties":{"docId":{"type":"string","format":"uuid","nullable":true,"description":"Document ID within the store. If provided, existing configuration from the document will be used for the new document"},"metadata":{"type":"object","nullable":true,"description":"Metadata associated with the document"},"replaceExisting":{"type":"boolean","nullable":true,"description":"Whether to replace existing document loader with the new upserted chunks. However this does not delete the existing embeddings in the vector store"},"createNewDocStore":{"type":"boolean","nullable":true,"description":"Whether to create a new document store"},"docStore":{"type":"object","nullable":true,"description":"Only when createNewDocStore is true, pass in the new document store configuration","properties":{"name":{"type":"string","description":"Name of the new document store to be created"},"description":{"type":"string","description":"Description of the new document store to be created"}}},"loader":{"type":"object","nullable":true,"properties":{"name":{"type":"string","description":"Name of the loader (camelCase)"},"config":{"type":"object","description":"Configuration for the loader"}}},"splitter":{"type":"object","nullable":true,"properties":{"name":{"type":"string","description":"Name of the text splitter (camelCase)"},"config":{"type":"object","description":"Configuration for the text splitter"}}},"embedding":{"type":"object","nullable":true,"properties":{"name":{"type":"string","description":"Name of the embedding generator (camelCase)"},"config":{"type":"object","description":"Configuration for the embedding 
generator"}}},"vectorStore":{"type":"object","nullable":true,"properties":{"name":{"type":"string","description":"Name of the vector store (camelCase)"},"config":{"type":"object","description":"Configuration for the vector store"}}},"recordManager":{"type":"object","nullable":true,"properties":{"name":{"type":"string","description":"Name of the record manager (camelCase)"},"config":{"type":"object","description":"Configuration for the record manager"}}}}},"VectorUpsertResponse":{"type":"object","properties":{"numAdded":{"type":"number","description":"Number of vectors added"},"numDeleted":{"type":"number","description":"Number of vectors deleted"},"numUpdated":{"type":"number","description":"Number of vectors updated"},"numSkipped":{"type":"number","description":"Number of vectors skipped (not added, deleted, or updated)"},"addedDocs":{"type":"array","items":{"$ref":"#/components/schemas/Document"}}}},"Document":{"type":"object","properties":{"pageContent":{"type":"string"},"metadata":{"type":"object","additionalProperties":{"type":"string"}}}}}},"paths":{"/document-store/refresh/{id}":{"post":{"tags":["document-store"],"summary":"Re-process and upsert all documents in document store","description":"Re-process and upsert all existing documents in document store","operationId":"refreshDocument","parameters":[{"in":"path","name":"id","required":true,"schema":{"type":"string","format":"uuid"},"description":"Document Store ID"}],"requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/DocumentStoreLoaderForRefresh"}}},"required":true},"responses":{"200":{"description":"Successfully execute refresh operation","content":{"application/json":{"schema":{"type":"array","items":{"$ref":"#/components/schemas/VectorUpsertResponse"}}}}},"400":{"description":"Invalid request body"},"500":{"description":"Internal server error"}}}}}}
```

## Retrieval query

> Retrieval query for the upserted chunks

```json
{"tags":[{"name":"document-store"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"Document":{"type":"object","properties":{"pageContent":{"type":"string"},"metadata":{"type":"object","additionalProperties":{"type":"string"}}}}}},"paths":{"/document-store/vectorstore/query":{"post":{"tags":["document-store"],"summary":"Retrieval query","description":"Retrieval query for the upserted chunks","operationId":"queryVectorStore","requestBody":{"content":{"application/json":{"schema":{"type":"object","required":["storeId","query"],"properties":{"storeId":{"type":"string","description":"Document Store ID"},"query":{"type":"string","description":"Query to search for"}}}}},"required":true},"responses":{"200":{"description":"Successfully executed query on vector store","content":{"application/json":{"schema":{"type":"object","properties":{"timeTaken":{"type":"number","description":"Time taken to execute the query (in milliseconds)"},"docs":{"type":"array","items":{"$ref":"#/components/schemas/Document"}}}}}}},"400":{"description":"Invalid request body"},"500":{"description":"Internal server error"}}}}}}
```
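Both `storeId` and `query` are required by the request schema. A standard-library sketch for issuing the query (base URL and key are placeholder assumptions):

```python
import json
import urllib.request

BASE_URL = "http://localhost:3000/api/v1"  # assumption: default local instance
API_KEY = "YOUR_API_KEY"                   # hypothetical placeholder

def build_query_request(store_id: str, query: str) -> urllib.request.Request:
    """Build a POST /document-store/vectorstore/query request."""
    body = json.dumps({"storeId": store_id, "query": query}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/document-store/vectorstore/query",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

A 200 response contains `timeTaken` (in milliseconds) and a `docs` array of matching chunks, each with `pageContent` and `metadata`.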

## Create a new document store

> Creates a new document store with the provided details

```json
{"tags":[{"name":"document-store"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"DocumentStore":{"type":"object","properties":{"id":{"type":"string","format":"uuid","description":"Unique identifier for the document store"},"name":{"type":"string","description":"Name of the document store"},"description":{"type":"string","description":"Description of the document store"},"loaders":{"type":"string","description":"Loaders associated with the document store, stored as JSON string"},"whereUsed":{"type":"string","description":"Places where the document store is used, stored as JSON string"},"status":{"type":"string","enum":["EMPTY","SYNC","SYNCING","STALE","NEW","UPSERTING","UPSERTED"],"description":"Status of the document store"},"vectorStoreConfig":{"type":"string","description":"Configuration for the vector store, stored as JSON string"},"embeddingConfig":{"type":"string","description":"Configuration for the embedding, stored as JSON string"},"recordManagerConfig":{"type":"string","description":"Configuration for the record manager, stored as JSON string"},"createdDate":{"type":"string","format":"date-time","description":"Date and time when the document store was created"},"updatedDate":{"type":"string","format":"date-time","description":"Date and time when the document store was last updated"}}}}},"paths":{"/document-store/store":{"post":{"tags":["document-store"],"summary":"Create a new document store","description":"Creates a new document store with the provided details","operationId":"createDocumentStore","requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/DocumentStore"}}},"required":true},"responses":{"200":{"description":"Successfully created document store","content":{"application/json":{"schema":{"$ref":"#/components/schemas/DocumentStore"}}}},"400":{"description":"Invalid request body"},"500":{"description":"Internal server 
error"}}}}}}
```

## Update a specific chunk

> Updates a specific chunk from a document loader

```json
{"tags":[{"name":"document-store"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"Document":{"type":"object","properties":{"pageContent":{"type":"string"},"metadata":{"type":"object","additionalProperties":{"type":"string"}}}},"DocumentStoreFileChunkPagedResponse":{"type":"object","properties":{"chunks":{"type":"array","items":{"$ref":"#/components/schemas/DocumentStoreFileChunk"}},"count":{"type":"number"},"file":{"$ref":"#/components/schemas/DocumentStoreLoaderForPreview"},"currentPage":{"type":"number"},"storeName":{"type":"string"},"description":{"type":"string"}}},"DocumentStoreFileChunk":{"type":"object","properties":{"id":{"type":"string","format":"uuid","description":"Unique identifier for the file chunk"},"docId":{"type":"string","format":"uuid","description":"Document ID within the store"},"storeId":{"type":"string","format":"uuid","description":"Document Store ID"},"chunkNo":{"type":"integer","description":"Chunk number within the document"},"pageContent":{"type":"string","description":"Content of the chunk"},"metadata":{"type":"string","description":"Metadata associated with the chunk"}}},"DocumentStoreLoaderForPreview":{"type":"object","properties":{"id":{"type":"string","format":"uuid","description":"Unique identifier for the document store loader"},"loaderId":{"type":"string","description":"ID of the loader"},"loaderName":{"type":"string","description":"Name of the loader"},"loaderConfig":{"type":"object","description":"Configuration for the loader"},"splitterId":{"type":"string","description":"ID of the text splitter"},"splitterName":{"type":"string","description":"Name of the text splitter"},"splitterConfig":{"type":"object","description":"Configuration for the text splitter"},"totalChunks":{"type":"number","description":"Total number of chunks"},"totalChars":{"type":"number","description":"Total number of 
characters"},"status":{"type":"string","enum":["EMPTY","SYNC","SYNCING","STALE","NEW","UPSERTING","UPSERTED"],"description":"Status of the document store loader"},"storeId":{"type":"string","description":"ID of the document store"},"files":{"type":"array","items":{"$ref":"#/components/schemas/DocumentStoreLoaderFile"}},"source":{"type":"string","description":"Source of the document store loader"},"credential":{"type":"string","description":"Credential associated with the document store loader"},"rehydrated":{"type":"boolean","description":"Whether the loader has been rehydrated"},"preview":{"type":"boolean","description":"Whether the loader is in preview mode"},"previewChunkCount":{"type":"number","description":"Number of chunks in preview mode"}}},"DocumentStoreLoaderFile":{"type":"object","properties":{"id":{"type":"string","format":"uuid","description":"Unique identifier for the file"},"name":{"type":"string","description":"Name of the file"},"mimePrefix":{"type":"string","description":"MIME prefix of the file"},"size":{"type":"number","description":"Size of the file"},"status":{"type":"string","enum":["EMPTY","SYNC","SYNCING","STALE","NEW","UPSERTING","UPSERTED"],"description":"Status of the file"},"uploaded":{"type":"string","format":"date-time","description":"Date and time when the file was uploaded"}}}}},"paths":{"/document-store/chunks/{storeId}/{loaderId}/{chunkId}":{"put":{"tags":["document-store"],"summary":"Update a specific chunk","description":"Updates a specific chunk from a document loader","operationId":"editDocumentStoreFileChunk","parameters":[{"in":"path","name":"storeId","required":true,"schema":{"type":"string"},"description":"Document Store ID"},{"in":"path","name":"loaderId","required":true,"schema":{"type":"string"},"description":"Document Loader ID"},{"in":"path","name":"chunkId","required":true,"schema":{"type":"string"},"description":"Document Chunk 
ID"}],"requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Document"}}},"required":true},"responses":{"200":{"description":"Successfully updated chunk","content":{"application/json":{"schema":{"$ref":"#/components/schemas/DocumentStoreFileChunkPagedResponse"}}}},"404":{"description":"Document store not found"},"500":{"description":"Internal server error"}}}}}}
```

## Update a specific document store

> Updates the details of a specific document store by its ID

```json
{"tags":[{"name":"document-store"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"DocumentStore":{"type":"object","properties":{"id":{"type":"string","format":"uuid","description":"Unique identifier for the document store"},"name":{"type":"string","description":"Name of the document store"},"description":{"type":"string","description":"Description of the document store"},"loaders":{"type":"string","description":"Loaders associated with the document store, stored as JSON string"},"whereUsed":{"type":"string","description":"Places where the document store is used, stored as JSON string"},"status":{"type":"string","enum":["EMPTY","SYNC","SYNCING","STALE","NEW","UPSERTING","UPSERTED"],"description":"Status of the document store"},"vectorStoreConfig":{"type":"string","description":"Configuration for the vector store, stored as JSON string"},"embeddingConfig":{"type":"string","description":"Configuration for the embedding, stored as JSON string"},"recordManagerConfig":{"type":"string","description":"Configuration for the record manager, stored as JSON string"},"createdDate":{"type":"string","format":"date-time","description":"Date and time when the document store was created"},"updatedDate":{"type":"string","format":"date-time","description":"Date and time when the document store was last updated"}}}}},"paths":{"/document-store/store/{id}":{"put":{"tags":["document-store"],"summary":"Update a specific document store","description":"Updates the details of a specific document store by its ID","operationId":"updateDocumentStore","parameters":[{"in":"path","name":"id","required":true,"schema":{"type":"string","format":"uuid"},"description":"Document Store ID"}],"requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/DocumentStore"}}},"required":true},"responses":{"200":{"description":"Successfully updated document store","content":{"application/json":{"schema":{"$ref":"#/components/schemas/DocumentStore"}}}},"404":{"description":"Document store not found"},"500":{"description":"Internal server error"}}}}}}
```

## Delete a specific document store

> Deletes a document store by its ID

```json
{"tags":[{"name":"document-store"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}}},"paths":{"/document-store/store/{id}":{"delete":{"tags":["document-store"],"summary":"Delete a specific document store","description":"Deletes a document store by its ID","operationId":"deleteDocumentStore","parameters":[{"in":"path","name":"id","required":true,"schema":{"type":"string","format":"uuid"},"description":"Document Store ID"}],"responses":{"200":{"description":"Successfully deleted document store"},"404":{"description":"Document store not found"},"500":{"description":"Internal server error"}}}}}}
```
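As an illustration of calling this endpoint, the request can be built with Python's standard library. The base URL, API key, and store ID below are placeholders for your own deployment:

```python
import urllib.request

BASE_URL = "http://localhost:3000/api/v1"  # assumed local Flowise instance
API_KEY = "your-api-key"                   # placeholder bearer token
store_id = "d290f1ee-6c54-4b01-90e6-d701748f0851"  # example Document Store ID

# DELETE /document-store/store/{id}
req = urllib.request.Request(
    f"{BASE_URL}/document-store/store/{store_id}",
    headers={"Authorization": f"Bearer {API_KEY}"},
    method="DELETE",
)
# with urllib.request.urlopen(req) as resp:  # uncomment to send
#     print(resp.status)  # 200 on success
```

The same pattern (bearer header plus an explicit HTTP method) applies to every authenticated endpoint in this reference.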

## Delete a specific chunk from a document loader

> Deletes a specific chunk from a document loader by its chunk ID

```json
{"tags":[{"name":"document-store"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}}},"paths":{"/document-store/chunks/{storeId}/{loaderId}/{chunkId}":{"delete":{"tags":["document-store"],"summary":"Delete a specific chunk from a document loader","description":"Delete a specific chunk from a document loader","operationId":"deleteDocumentStoreFileChunk","parameters":[{"in":"path","name":"storeId","required":true,"schema":{"type":"string"},"description":"Document Store ID"},{"in":"path","name":"loaderId","required":true,"schema":{"type":"string"},"description":"Document Loader ID"},{"in":"path","name":"chunkId","required":true,"schema":{"type":"string"},"description":"Document Chunk ID"}],"responses":{"200":{"description":"Successfully deleted chunk"},"400":{"description":"Invalid ID provided"},"404":{"description":"Document Store not found"},"500":{"description":"Internal server error"}}}}}}
```

## Delete a specific document loader and associated chunks from a document store

> Deletes a specific document loader and its associated chunks from the document store. This does not delete data from the vector store.

```json
{"tags":[{"name":"document-store"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}}},"paths":{"/document-store/loader/{storeId}/{loaderId}":{"delete":{"tags":["document-store"],"summary":"Delete specific document loader and associated chunks from document store","description":"Delete specific document loader and associated chunks from document store. This does not delete data from vector store.","operationId":"deleteLoaderFromDocumentStore","parameters":[{"in":"path","name":"storeId","required":true,"schema":{"type":"string"},"description":"Document Store ID"},{"in":"path","name":"loaderId","required":true,"schema":{"type":"string"},"description":"Document Loader ID"}],"responses":{"200":{"description":"Successfully deleted loader from document store"},"400":{"description":"Invalid ID provided"},"404":{"description":"Document Store not found"},"500":{"description":"Internal server error"}}}}}}
```

## Delete data from vector store

> Only data that was upserted with the Record Manager will be deleted from the vector store

```json
{"tags":[{"name":"document-store"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}}},"paths":{"/document-store/vectorstore/{id}":{"delete":{"tags":["document-store"],"summary":"Delete data from vector store","description":"Only data that were upserted with Record Manager will be deleted from vector store","operationId":"deleteVectorStoreFromStore","parameters":[{"in":"path","name":"id","required":true,"schema":{"type":"string"},"description":"Document Store ID"}],"responses":{"200":{"description":"Successfully deleted data from vector store"},"400":{"description":"Invalid ID provided"},"404":{"description":"Document Store not found"},"500":{"description":"Internal server error"}}}}}}
```


# Feedback

## List all chat message feedbacks for a chatflow

> Retrieve all feedbacks for a chatflow

```json
{"tags":[{"name":"feedback"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"ChatMessageFeedback":{"type":"object","properties":{"id":{"type":"string","format":"uuid","description":"Unique identifier for the feedback"},"chatflowid":{"type":"string","format":"uuid","description":"Identifier for the chat flow"},"chatId":{"type":"string","description":"Identifier for the chat"},"messageId":{"type":"string","format":"uuid","description":"Identifier for the message"},"rating":{"type":"string","enum":["THUMBS_UP","THUMBS_DOWN"],"description":"Rating for the message"},"content":{"type":"string","description":"Feedback content"},"createdDate":{"type":"string","format":"date-time","description":"Date and time when the feedback was created"}}}}},"paths":{"/feedback/{id}":{"get":{"tags":["feedback"],"summary":"List all chat message feedbacks for a chatflow","description":"Retrieve all feedbacks for a chatflow","operationId":"getAllChatMessageFeedback","parameters":[{"in":"path","name":"id","required":true,"schema":{"type":"string"},"description":"Chatflow ID"},{"in":"query","name":"chatId","schema":{"type":"string"},"description":"Chat ID to filter feedbacks (optional)"},{"in":"query","name":"sortOrder","schema":{"type":"string","enum":["asc","desc"],"default":"asc"},"description":"Sort order of feedbacks (optional)"},{"in":"query","name":"startDate","schema":{"type":"string","format":"date-time"},"description":"Filter feedbacks starting from this date (optional)"},{"in":"query","name":"endDate","schema":{"type":"string","format":"date-time"},"description":"Filter feedbacks up to this date (optional)"}],"responses":{"200":{"description":"Successful operation","content":{"application/json":{"schema":{"type":"array","items":{"$ref":"#/components/schemas/ChatMessageFeedback"}}}}},"500":{"description":"Internal server error"}}}}}}
```

## Create new chat message feedback

> Create new feedback for a specific chatflow.

```json
{"tags":[{"name":"feedback"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"ChatMessageFeedback":{"type":"object","properties":{"id":{"type":"string","format":"uuid","description":"Unique identifier for the feedback"},"chatflowid":{"type":"string","format":"uuid","description":"Identifier for the chat flow"},"chatId":{"type":"string","description":"Identifier for the chat"},"messageId":{"type":"string","format":"uuid","description":"Identifier for the message"},"rating":{"type":"string","enum":["THUMBS_UP","THUMBS_DOWN"],"description":"Rating for the message"},"content":{"type":"string","description":"Feedback content"},"createdDate":{"type":"string","format":"date-time","description":"Date and time when the feedback was created"}}}}},"paths":{"/feedback":{"post":{"tags":["feedback"],"operationId":"createChatMessageFeedbackForChatflow","summary":"Create new chat message feedback","description":"Create new feedback for a specific chat flow.","requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/ChatMessageFeedback"}}},"required":true},"responses":{"200":{"description":"Feedback successfully created","content":{"application/json":{"schema":{"$ref":"#/components/schemas/ChatMessageFeedback"}}}},"400":{"description":"Invalid input provided"},"500":{"description":"Internal server error"}}}}}}
```
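A minimal sketch of submitting feedback with Python's standard library. The base URL, API key, and IDs are placeholders, and `rating` must be one of the enum values defined in the schema above:

```python
import json
import urllib.request

BASE_URL = "http://localhost:3000/api/v1"  # assumed local Flowise instance
API_KEY = "your-api-key"                   # placeholder bearer token

# ChatMessageFeedback body; the UUIDs here are illustrative placeholders
feedback = {
    "chatflowid": "d290f1ee-6c54-4b01-90e6-d701748f0851",
    "chatId": "chat-123",
    "messageId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d",
    "rating": "THUMBS_UP",  # or "THUMBS_DOWN"
    "content": "Clear and accurate answer",
}

req = urllib.request.Request(
    f"{BASE_URL}/feedback",
    data=json.dumps(feedback).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to send; returns the stored feedback
```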

## Update chat message feedback

> Update a specific feedback

```json
{"tags":[{"name":"feedback"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"ChatMessageFeedback":{"type":"object","properties":{"id":{"type":"string","format":"uuid","description":"Unique identifier for the feedback"},"chatflowid":{"type":"string","format":"uuid","description":"Identifier for the chat flow"},"chatId":{"type":"string","description":"Identifier for the chat"},"messageId":{"type":"string","format":"uuid","description":"Identifier for the message"},"rating":{"type":"string","enum":["THUMBS_UP","THUMBS_DOWN"],"description":"Rating for the message"},"content":{"type":"string","description":"Feedback content"},"createdDate":{"type":"string","format":"date-time","description":"Date and time when the feedback was created"}}}}},"paths":{"/feedback/{id}":{"put":{"tags":["feedback"],"summary":"Update chat message feedback","description":"Update a specific feedback","operationId":"updateChatMessageFeedbackForChatflow","parameters":[{"in":"path","name":"id","required":true,"schema":{"type":"string"},"description":"Chat Message Feedback ID"}],"requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/ChatMessageFeedback"}}}},"responses":{"200":{"description":"Feedback successfully updated","content":{"application/json":{"schema":{"$ref":"#/components/schemas/ChatMessageFeedback"}}}},"400":{"description":"Invalid input provided"},"404":{"description":"Feedback with the specified ID was not found"},"500":{"description":"Internal server error"}}}}}}
```


# Leads

## Get all leads for a specific chatflow

> Retrieve all leads associated with a specific chatflow

```json
{"tags":[{"name":"leads"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"Lead":{"type":"object","properties":{"id":{"type":"string","format":"uuid","description":"Unique identifier for the lead"},"name":{"type":"string","description":"Name of the lead"},"email":{"type":"string","description":"Email address of the lead"},"phone":{"type":"string","description":"Phone number of the lead"},"chatflowid":{"type":"string","description":"ID of the chatflow the lead is associated with"},"chatId":{"type":"string","description":"ID of the chat session the lead is associated with"},"createdDate":{"type":"string","format":"date-time","description":"Date and time when the lead was created"}}}}},"paths":{"/leads/{id}":{"get":{"tags":["leads"],"summary":"Get all leads for a specific chatflow","description":"Retrieve all leads associated with a specific chatflow","operationId":"getAllLeadsForChatflow","parameters":[{"in":"path","name":"id","required":true,"schema":{"type":"string"},"description":"Chatflow ID"}],"responses":{"200":{"description":"Successful operation","content":{"application/json":{"schema":{"type":"array","items":{"$ref":"#/components/schemas/Lead"}}}}},"400":{"description":"Invalid ID provided"},"404":{"description":"Leads not found"},"500":{"description":"Internal server error"}}}}}}
```
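A hedged sketch of fetching leads for a chatflow, again with placeholder base URL, API key, and chatflow ID:

```python
import json
import urllib.request

BASE_URL = "http://localhost:3000/api/v1"  # assumed local Flowise instance
API_KEY = "your-api-key"                   # placeholder bearer token
chatflow_id = "d290f1ee-6c54-4b01-90e6-d701748f0851"  # example Chatflow ID

# GET /leads/{id} returns an array of Lead objects for the chatflow
req = urllib.request.Request(
    f"{BASE_URL}/leads/{chatflow_id}",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
# with urllib.request.urlopen(req) as resp:  # uncomment to send
#     leads = json.load(resp)
#     print([lead["email"] for lead in leads])
```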

## Create a new lead in a chatflow

> Create a new lead associated with a specific chatflow

```json
{"tags":[{"name":"leads"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"Lead":{"type":"object","properties":{"id":{"type":"string","format":"uuid","description":"Unique identifier for the lead"},"name":{"type":"string","description":"Name of the lead"},"email":{"type":"string","description":"Email address of the lead"},"phone":{"type":"string","description":"Phone number of the lead"},"chatflowid":{"type":"string","description":"ID of the chatflow the lead is associated with"},"chatId":{"type":"string","description":"ID of the chat session the lead is associated with"},"createdDate":{"type":"string","format":"date-time","description":"Date and time when the lead was created"}}}}},"paths":{"/leads":{"post":{"tags":["leads"],"operationId":"createLead","summary":"Create a new lead in a chatflow","description":"Create a new lead associated with a specific chatflow","requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Lead"}}},"required":true},"responses":{"200":{"description":"Lead created successfully","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Lead"}}}},"400":{"description":"Invalid request body"},"422":{"description":"Validation error"},"500":{"description":"Internal server error"}}}}}}
```


# Ping

## Ping the server

> Ping the server to check if it is running

```json
{"tags":[{"name":"ping"}],"paths":{"/ping":{"get":{"tags":["ping"],"summary":"Ping the server","description":"Ping the server to check if it is running","operationId":"pingServer","responses":{"200":{"description":"Server is running","content":{"text/plain":{"schema":{"type":"string"}}}},"500":{"description":"Internal server error"}}}}}}
```
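This endpoint declares no security scheme, so it is a convenient health check. A minimal sketch, assuming a local instance at a placeholder base URL:

```python
import urllib.request

BASE_URL = "http://localhost:3000/api/v1"  # assumed local Flowise instance

# /ping is unauthenticated and returns a plain-text body on success
req = urllib.request.Request(f"{BASE_URL}/ping")
# with urllib.request.urlopen(req) as resp:  # uncomment to send
#     print(resp.status, resp.read().decode())
```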


# Prediction

## Send message to flow and get AI response

> Send a message to your flow and receive an AI-generated response. This is the primary endpoint for interacting with your flows and assistants.
>
> **Authentication**: API key may be required depending on flow settings.

```json
{"tags":[{"name":"prediction"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"Prediction":{"type":"object","properties":{"question":{"type":"string","description":"The question/message to send to the flow"},"form":{"type":"object","description":"The form object to send to the flow (alternative to question for Agentflow V2)","additionalProperties":true},"streaming":{"type":"boolean","description":"Enable streaming responses for real-time output","default":false},"overrideConfig":{"type":"object","description":"Override flow configuration and pass variables at runtime","additionalProperties":true},"history":{"type":"array","description":"Previous conversation messages for context","items":{"type":"object","properties":{"role":{"type":"string","enum":["apiMessage","userMessage"],"description":"The role of the message"},"content":{"type":"string","description":"The content of the message"}}}},"uploads":{"type":"array","description":"Files to upload (images, audio, documents, etc.)","items":{"type":"object","properties":{"type":{"type":"string","enum":["audio","url","file","file:rag","file:full"],"description":"The type of file upload"},"name":{"type":"string","description":"The name of the file or resource"},"data":{"type":"string","description":"The base64-encoded data or URL for the resource"},"mime":{"type":"string","description":"The MIME type of the file or resource","enum":["image/png","image/jpeg","image/jpg","image/gif","image/webp","audio/mp4","audio/webm","audio/wav","audio/mpeg","audio/ogg","audio/aac"]}}}},"humanInput":{"type":"object","description":"Return human feedback and resume execution from a stopped checkpoint","properties":{"type":{"type":"string","enum":["proceed","reject"],"description":"Type of human input response"},"feedback":{"type":"string","description":"Feedback to the last output"}}}}},"Document":{"type":"object","properties":{"pageContent":{"type":"string"},"metadata":{"type":"object","additionalProperties":{"type":"string"}}}},"UsedTool":{"type":"object","properties":{"tool":{"type":"string"},"toolInput":{"type":"object","additionalProperties":{"type":"string"}},"toolOutput":{"type":"string"}}}}},"paths":{"/prediction/{id}":{"post":{"tags":["prediction"],"operationId":"createPrediction","summary":"Send message to flow and get AI response","description":"Send a message to your flow and receive an AI-generated response. This is the primary endpoint for interacting with your flows and assistants.\n**Authentication**: API key may be required depending on flow settings.\n","parameters":[{"in":"path","name":"id","required":true,"schema":{"type":"string"},"description":"Flow ID - the unique identifier of your flow"}],"requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Prediction"}},"multipart/form-data":{"schema":{"type":"object","properties":{"question":{"type":"string","description":"Question/message to send to the flow"},"files":{"type":"array","items":{"type":"string","format":"binary"},"description":"Files to be uploaded (images, audio, documents, etc.)"},"streaming":{"type":"boolean","description":"Enable streaming responses","default":false},"overrideConfig":{"type":"string","description":"JSON string of configuration overrides"},"history":{"type":"string","description":"JSON string of conversation history"},"humanInput":{"type":"string","description":"JSON string of human input for resuming execution"}},"required":["question"]}}},"required":true},"responses":{"200":{"description":"Successful prediction response","content":{"application/json":{"schema":{"type":"object","properties":{"text":{"type":"string","description":"The AI-generated response text"},"json":{"type":"object","description":"The result in JSON format if available (for structured outputs)","nullable":true},"question":{"type":"string","description":"The original question/message sent to the flow"},"chatId":{"type":"string","description":"Unique identifier for the chat session"},"chatMessageId":{"type":"string","description":"Unique identifier for this specific message"},"sessionId":{"type":"string","description":"Session identifier for conversation continuity","nullable":true},"memoryType":{"type":"string","description":"Type of memory used for conversation context","nullable":true},"sourceDocuments":{"type":"array","description":"Documents retrieved from vector store (if RAG is enabled)","items":{"$ref":"#/components/schemas/Document"},"nullable":true},"usedTools":{"type":"array","description":"Tools that were invoked during the response generation","items":{"$ref":"#/components/schemas/UsedTool"},"nullable":true}}}}}},"400":{"description":"Bad Request - Invalid input provided or request format is incorrect","content":{"application/json":{"schema":{"type":"object","properties":{"error":{"type":"string"}}}}}},"401":{"description":"Unauthorized - API key required or invalid","content":{"application/json":{"schema":{"type":"object","properties":{"error":{"type":"string"}}}}}},"404":{"description":"Not Found - Chatflow with specified ID does not exist","content":{"application/json":{"schema":{"type":"object","properties":{"error":{"type":"string"}}}}}},"413":{"description":"Payload Too Large - Request payload exceeds size limits","content":{"application/json":{"schema":{"type":"object","properties":{"error":{"type":"string"}}}}}},"422":{"description":"Validation Error - Request validation failed","content":{"application/json":{"schema":{"type":"object","properties":{"error":{"type":"string"}}}}}},"500":{"description":"Internal Server Error - Flow configuration or execution error","content":{"application/json":{"schema":{"type":"object","properties":{"error":{"type":"string"}}}}}}}}}}}
```
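A sketch of a JSON prediction request using Python's standard library. The base URL, API key, flow ID, and message contents are all placeholders; `history` and `overrideConfig` are optional and shown only to illustrate the request shape:

```python
import json
import urllib.request

BASE_URL = "http://localhost:3000/api/v1"  # assumed local Flowise instance
API_KEY = "your-api-key"                   # placeholder bearer token
flow_id = "d290f1ee-6c54-4b01-90e6-d701748f0851"  # example Flow ID

body = {
    "question": "What is retrieval-augmented generation?",
    "streaming": False,
    # optional: prior turns give the flow conversational context
    "history": [
        {"role": "userMessage", "content": "Hi"},
        {"role": "apiMessage", "content": "Hello! How can I help?"},
    ],
}

req = urllib.request.Request(
    f"{BASE_URL}/prediction/{flow_id}",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# with urllib.request.urlopen(req) as resp:  # uncomment to send
#     answer = json.load(resp)
#     print(answer["text"])  # the AI-generated response text
```

For file uploads, switch the request body to `multipart/form-data` as described in the schema above; for Agentflow V2 form inputs, send `form` instead of `question`.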


# Tools

## Create a new tool

> Create a new tool

```json
{"tags":[{"name":"tools"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"Tool":{"type":"object","properties":{"id":{"type":"string","format":"uuid","description":"Unique identifier for the tool"},"name":{"type":"string","description":"Name of the tool"},"description":{"type":"string","description":"Description of the tool"},"color":{"type":"string","description":"Color associated with the tool"},"iconSrc":{"type":"string","nullable":true,"description":"Source URL for the tool's icon"},"schema":{"type":"string","nullable":true,"description":"JSON schema associated with the tool"},"func":{"type":"string","nullable":true,"description":"Functionality description or code associated with the tool"},"createdDate":{"type":"string","format":"date-time","description":"Date and time when the tool was created"},"updatedDate":{"type":"string","format":"date-time","description":"Date and time when the tool was last updated"}}}}},"paths":{"/tools":{"post":{"tags":["tools"],"operationId":"createTool","summary":"Create a new tool","description":"Create a new tool","requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Tool"}}},"required":true},"responses":{"200":{"description":"Tool created successfully","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Tool"}}}},"400":{"description":"Invalid request body"},"422":{"description":"Validation error"},"500":{"description":"Internal server error"}}}}}}
```
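Note that `schema` and `func` are *strings* in the Tool schema, so structured content must be serialized before sending. A hedged sketch with placeholder credentials and purely illustrative field values:

```python
import json
import urllib.request

BASE_URL = "http://localhost:3000/api/v1"  # assumed local Flowise instance
API_KEY = "your-api-key"                   # placeholder bearer token

# Tool body; "schema" and "func" are JSON/code strings, not nested objects.
# Their contents here are illustrative, not a prescribed format.
tool = {
    "name": "calculator",
    "description": "Evaluates a basic arithmetic expression",
    "color": "#1E88E5",
    "schema": json.dumps({"expression": "string"}),
    "func": "return String(2 + 2)",  # illustrative function body
}

req = urllib.request.Request(
    f"{BASE_URL}/tools",
    data=json.dumps(tool).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to send; returns the created Tool
```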

## List all tools

> Retrieve a list of all tools

```json
{"tags":[{"name":"tools"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"Tool":{"type":"object","properties":{"id":{"type":"string","format":"uuid","description":"Unique identifier for the tool"},"name":{"type":"string","description":"Name of the tool"},"description":{"type":"string","description":"Description of the tool"},"color":{"type":"string","description":"Color associated with the tool"},"iconSrc":{"type":"string","nullable":true,"description":"Source URL for the tool's icon"},"schema":{"type":"string","nullable":true,"description":"JSON schema associated with the tool"},"func":{"type":"string","nullable":true,"description":"Functionality description or code associated with the tool"},"createdDate":{"type":"string","format":"date-time","description":"Date and time when the tool was created"},"updatedDate":{"type":"string","format":"date-time","description":"Date and time when the tool was last updated"}}}}},"paths":{"/tools":{"get":{"tags":["tools"],"summary":"List all tools","description":"Retrieve a list of all tools","operationId":"getAllTools","responses":{"200":{"description":"Successful operation","content":{"application/json":{"schema":{"type":"array","items":{"$ref":"#/components/schemas/Tool"}}}}},"500":{"description":"Internal server error"}}}}}}
```

## Get a tool by ID

> Retrieve a specific tool by ID

```json
{"tags":[{"name":"tools"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"Tool":{"type":"object","properties":{"id":{"type":"string","format":"uuid","description":"Unique identifier for the tool"},"name":{"type":"string","description":"Name of the tool"},"description":{"type":"string","description":"Description of the tool"},"color":{"type":"string","description":"Color associated with the tool"},"iconSrc":{"type":"string","nullable":true,"description":"Source URL for the tool's icon"},"schema":{"type":"string","nullable":true,"description":"JSON schema associated with the tool"},"func":{"type":"string","nullable":true,"description":"Functionality description or code associated with the tool"},"createdDate":{"type":"string","format":"date-time","description":"Date and time when the tool was created"},"updatedDate":{"type":"string","format":"date-time","description":"Date and time when the tool was last updated"}}}}},"paths":{"/tools/{id}":{"get":{"tags":["tools"],"summary":"Get a tool by ID","description":"Retrieve a specific tool by ID","operationId":"getToolById","parameters":[{"in":"path","name":"id","required":true,"schema":{"type":"string"},"description":"Tool ID"}],"responses":{"200":{"description":"Successful operation","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Tool"}}}},"400":{"description":"Invalid ID provided"},"404":{"description":"Tool not found"},"500":{"description":"Internal server error"}}}}}}
```

## Update a tool by ID

> Update a specific tool by ID

```json
{"tags":[{"name":"tools"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"Tool":{"type":"object","properties":{"id":{"type":"string","format":"uuid","description":"Unique identifier for the tool"},"name":{"type":"string","description":"Name of the tool"},"description":{"type":"string","description":"Description of the tool"},"color":{"type":"string","description":"Color associated with the tool"},"iconSrc":{"type":"string","nullable":true,"description":"Source URL for the tool's icon"},"schema":{"type":"string","nullable":true,"description":"JSON schema associated with the tool"},"func":{"type":"string","nullable":true,"description":"Functionality description or code associated with the tool"},"createdDate":{"type":"string","format":"date-time","description":"Date and time when the tool was created"},"updatedDate":{"type":"string","format":"date-time","description":"Date and time when the tool was last updated"}}}}},"paths":{"/tools/{id}":{"put":{"tags":["tools"],"summary":"Update a tool by ID","description":"Update a specific tool by ID","operationId":"updateTool","parameters":[{"in":"path","name":"id","required":true,"schema":{"type":"string"},"description":"Tool ID"}],"requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Tool"}}},"required":true},"responses":{"200":{"description":"Tool updated successfully","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Tool"}}}},"400":{"description":"Invalid ID or request body provided"},"404":{"description":"Tool not found"},"500":{"description":"Internal server error"}}}}}}
```

## Delete a tool by ID

> Delete a specific tool by ID

```json
{"tags":[{"name":"tools"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}}},"paths":{"/tools/{id}":{"delete":{"tags":["tools"],"summary":"Delete a tool by ID","description":"Delete a specific tool by ID","operationId":"deleteTool","parameters":[{"in":"path","name":"id","required":true,"schema":{"type":"string"},"description":"Tool ID"}],"responses":{"200":{"description":"Tool deleted successfully"},"400":{"description":"Invalid ID provided"},"404":{"description":"Tool not found"},"500":{"description":"Internal server error"}}}}}}
```


# Upsert History

## Get all upsert history records

> Retrieve all upsert history records with optional filters

```json
{"tags":[{"name":"upsert-history"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"UpsertHistoryResponse":{"type":"object","properties":{"id":{"type":"string","format":"uuid","description":"Unique identifier for the upsert history record"},"chatflowid":{"type":"string","description":"ID of the chatflow associated with the upsert history"},"result":{"type":"string","description":"Result of the upsert operation, stored as a JSON string"},"flowData":{"type":"string","description":"Flow data associated with the upsert operation, stored as a JSON string"},"date":{"type":"string","format":"date-time","description":"Date and time when the upsert operation was performed"}}}}},"paths":{"/upsert-history/{id}":{"get":{"tags":["upsert-history"],"summary":"Get all upsert history records","description":"Retrieve all upsert history records with optional filters","operationId":"getAllUpsertHistory","parameters":[{"in":"path","name":"id","required":true,"schema":{"type":"string"},"description":"Chatflow ID to filter records by"},{"in":"query","name":"order","required":false,"schema":{"type":"string","enum":["ASC","DESC"],"default":"ASC"},"description":"Sort order of the results (ascending or descending)"},{"in":"query","name":"startDate","required":false,"schema":{"type":"string","format":"date-time"},"description":"Filter records from this start date (inclusive)"},{"in":"query","name":"endDate","required":false,"schema":{"type":"string","format":"date-time"},"description":"Filter records until this end date (inclusive)"}],"responses":{"200":{"description":"Successful operation","content":{"application/json":{"schema":{"type":"array","items":{"$ref":"#/components/schemas/UpsertHistoryResponse"}}}}},"500":{"description":"Internal server error"}}}}}}
```
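The optional filters are passed as query parameters. A sketch with placeholder base URL, API key, and chatflow ID:

```python
import json
import urllib.parse
import urllib.request

BASE_URL = "http://localhost:3000/api/v1"  # assumed local Flowise instance
API_KEY = "your-api-key"                   # placeholder bearer token
chatflow_id = "d290f1ee-6c54-4b01-90e6-d701748f0851"  # example Chatflow ID

# All query parameters are optional; omitted ones fall back to their defaults
params = urllib.parse.urlencode({
    "order": "DESC",
    "startDate": "2024-01-01T00:00:00Z",
    "endDate": "2024-06-30T23:59:59Z",
})
req = urllib.request.Request(
    f"{BASE_URL}/upsert-history/{chatflow_id}?{params}",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
# with urllib.request.urlopen(req) as resp:  # uncomment to send
#     history = json.load(resp)  # list of UpsertHistoryResponse objects
```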

## Delete upsert history records

> Soft delete upsert history records by IDs

```json
{"tags":[{"name":"upsert-history"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}}},"paths":{"/upsert-history/{id}":{"patch":{"tags":["upsert-history"],"summary":"Delete upsert history records","description":"Soft delete upsert history records by IDs","operationId":"patchDeleteUpsertHistory","requestBody":{"content":{"application/json":{"schema":{"type":"object","properties":{"ids":{"type":"array","items":{"type":"string","format":"uuid"},"description":"List of upsert history record IDs to delete"}}}}}},"responses":{"200":{"description":"Successfully deleted records"},"400":{"description":"Invalid request body"},"500":{"description":"Internal server error"}}}}}}
```
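Unlike the other delete endpoints, this one uses `PATCH` and takes the record IDs in the request body. A sketch with placeholder IDs and credentials:

```python
import json
import urllib.request

BASE_URL = "http://localhost:3000/api/v1"  # assumed local Flowise instance
API_KEY = "your-api-key"                   # placeholder bearer token
chatflow_id = "d290f1ee-6c54-4b01-90e6-d701748f0851"  # example Chatflow ID

# The records to soft-delete are passed as a list of IDs in the PATCH body
payload = {"ids": [
    "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d",
    "1b9d6bcd-bbfd-4b2d-9b5d-ab8dfbbd4bed",
]}

req = urllib.request.Request(
    f"{BASE_URL}/upsert-history/{chatflow_id}",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="PATCH",
)
# urllib.request.urlopen(req)  # uncomment to send
```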


# Variables

## Create a new variable

> Create a new variable

```json
{"tags":[{"name":"variables"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"Variable":{"type":"object","properties":{"id":{"type":"string","format":"uuid","description":"Unique identifier for the variable"},"name":{"type":"string","description":"Name of the variable"},"value":{"type":"string","description":"Value of the variable","nullable":true},"type":{"type":"string","description":"Type of the variable (e.g., string, number)"},"createdDate":{"type":"string","format":"date-time","description":"Date and time when the variable was created"},"updatedDate":{"type":"string","format":"date-time","description":"Date and time when the variable was last updated"}}}}},"paths":{"/variables":{"post":{"tags":["variables"],"operationId":"createVariable","summary":"Create a new variable","description":"Create a new variable","requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Variable"}}},"required":true},"responses":{"200":{"description":"Variable created successfully","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Variable"}}}},"400":{"description":"Invalid request body"},"422":{"description":"Validation error"},"500":{"description":"Internal server error"}}}}}}
```
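A minimal sketch of creating a variable; the name, value, and credentials below are placeholders, and `type` follows the schema's own description:

```python
import json
import urllib.request

BASE_URL = "http://localhost:3000/api/v1"  # assumed local Flowise instance
API_KEY = "your-api-key"                   # placeholder bearer token

variable = {
    "name": "SUPPORT_EMAIL",        # placeholder variable name
    "value": "support@example.com",  # placeholder value
    "type": "string",
}

req = urllib.request.Request(
    f"{BASE_URL}/variables",
    data=json.dumps(variable).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to send; returns the created Variable
```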

## List all variables

> Retrieve a list of all variables

```json
{"tags":[{"name":"variables"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"Variable":{"type":"object","properties":{"id":{"type":"string","format":"uuid","description":"Unique identifier for the variable"},"name":{"type":"string","description":"Name of the variable"},"value":{"type":"string","description":"Value of the variable","nullable":true},"type":{"type":"string","description":"Type of the variable (e.g., string, number)"},"createdDate":{"type":"string","format":"date-time","description":"Date and time when the variable was created"},"updatedDate":{"type":"string","format":"date-time","description":"Date and time when the variable was last updated"}}}}},"paths":{"/variables":{"get":{"tags":["variables"],"summary":"List all variables","description":"Retrieve a list of all variables","operationId":"getAllVariables","responses":{"200":{"description":"Successful operation","content":{"application/json":{"schema":{"type":"array","items":{"$ref":"#/components/schemas/Variable"}}}}},"500":{"description":"Internal server error"}}}}}}
```

## Update a variable by ID

> Update a specific variable by ID

```json
{"tags":[{"name":"variables"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"Variable":{"type":"object","properties":{"id":{"type":"string","format":"uuid","description":"Unique identifier for the variable"},"name":{"type":"string","description":"Name of the variable"},"value":{"type":"string","description":"Value of the variable","nullable":true},"type":{"type":"string","description":"Type of the variable (e.g., string, number)"},"createdDate":{"type":"string","format":"date-time","description":"Date and time when the variable was created"},"updatedDate":{"type":"string","format":"date-time","description":"Date and time when the variable was last updated"}}}}},"paths":{"/variables/{id}":{"put":{"tags":["variables"],"summary":"Update a variable by ID","description":"Update a specific variable by ID","operationId":"updateVariable","parameters":[{"in":"path","name":"id","required":true,"schema":{"type":"string"},"description":"Variable ID"}],"requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Variable"}}},"required":true},"responses":{"200":{"description":"Variable updated successfully","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Variable"}}}},"400":{"description":"Invalid ID or request body provided"},"404":{"description":"Variable not found"},"500":{"description":"Internal server error"}}}}}}
```
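For reference, a minimal update request body might look like the sketch below. The values are purely illustrative; per the `Variable` schema above, `id`, `createdDate`, and `updatedDate` are server-managed, and the `type` value shown is an assumption rather than an exhaustive list:

```json
{
  "name": "API_HOST",
  "value": "https://api.example.com",
  "type": "static"
}
```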

## Delete a variable by ID

> Delete a specific variable by ID

```json
{"tags":[{"name":"variables"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}}},"paths":{"/variables/{id}":{"delete":{"tags":["variables"],"summary":"Delete a variable by ID","description":"Delete a specific variable by ID","operationId":"deleteVariable","parameters":[{"in":"path","name":"id","required":true,"schema":{"type":"string"},"description":"Variable ID"}],"responses":{"200":{"description":"Variable deleted successfully"},"400":{"description":"Invalid ID provided"},"404":{"description":"Variable not found"},"500":{"description":"Internal server error"}}}}}}
```



# Vector Upsert

## Upsert vector embeddings

> Upsert vector embeddings of documents in a chatflow

```json
{"tags":[{"name":"vector"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer","bearerFormat":"JWT"}},"schemas":{"VectorUpsertResponse":{"type":"object","properties":{"numAdded":{"type":"number","description":"Number of vectors added"},"numDeleted":{"type":"number","description":"Number of vectors deleted"},"numUpdated":{"type":"number","description":"Number of vectors updated"},"numSkipped":{"type":"number","description":"Number of vectors skipped (not added, deleted, or updated)"},"addedDocs":{"type":"array","items":{"$ref":"#/components/schemas/Document"}}}},"Document":{"type":"object","properties":{"pageContent":{"type":"string"},"metadata":{"type":"object","additionalProperties":{"type":"string"}}}}}},"paths":{"/vector/upsert/{id}":{"post":{"tags":["vector"],"operationId":"vectorUpsert","summary":"Upsert vector embeddings","description":"Upsert vector embeddings of documents in a chatflow","parameters":[{"in":"path","name":"id","required":true,"schema":{"type":"string"},"description":"Chatflow ID"}],"requestBody":{"content":{"application/json":{"schema":{"type":"object","properties":{"stopNodeId":{"type":"string","description":"In cases when you have multiple vector store nodes, you can specify the node ID to store the vectors"},"overrideConfig":{"type":"object","description":"The configuration to override the default vector upsert settings (optional)"}}}},"multipart/form-data":{"schema":{"type":"object","properties":{"files":{"type":"array","items":{"type":"string","format":"binary"},"description":"Files to be uploaded"},"modelName":{"type":"string","nullable":true,"description":"Other override configurations"}},"required":["files"]}}},"required":true},"responses":{"200":{"description":"Vector embeddings upserted successfully","content":{"application/json":{"schema":{"$ref":"#/components/schemas/VectorUpsertResponse"}}}},"400":{"description":"Invalid input provided"},"404":{"description":"Chatflow not found"},"422":{"description":"Validation error"},"500":{"description":"Internal server error"}}}}}}
```
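As a sketch of the JSON variant of this request, the body below targets a hypothetical vector store node and overrides a single configuration key; `pineconeNode_0` and `pineconeNamespace` are illustrative names, not values from a real chatflow:

```json
{
  "stopNodeId": "pineconeNode_0",
  "overrideConfig": {
    "pineconeNamespace": "my-namespace"
  }
}
```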


# CLI Reference

Using the Flowise CLI, you can programmatically execute many of the same tasks as in the GUI. This section introduces the Flowise Command Line Interface.

* [User](/cli-reference/user)


# User

## List User Emails

This command allows you to list all user emails registered in the system.

### Local Usage

```bash
pnpm user
```

Or, if using npm:

```bash
npx flowise user
```

### Docker Usage

If you're running Flowise in a Docker container, use the following command:

```bash
docker exec -it FLOWISE_CONTAINER_NAME pnpm user
```

Replace `FLOWISE_CONTAINER_NAME` with your actual Flowise container name.

## Reset User Password

This command allows you to reset a user's password.

### Local Usage

```bash
pnpm user --email "admin@admin.com" --password "myPassword1!"
```

Or, if using npm:

```bash
npx flowise user --email "admin@admin.com" --password "myPassword1!"
```

### Docker Usage

If you're running Flowise in a Docker container, use the following command:

```bash
docker exec -it FLOWISE_CONTAINER_NAME pnpm user --email "admin@admin.com" --password "myPassword1!"
```

Replace `FLOWISE_CONTAINER_NAME` with your actual Flowise container name.

### Parameters

* `--email`: The email address of the user whose password you want to reset
* `--password`: The new password to set for the user


# Using Flowise

Learn about some core functionalities built into Flowise

***

This section provides in-depth guides on core Flowise functionalities.

## Guides

* [Agentflow V2](/using-flowise/agentflowv2)
* [Agentflow V1 (Deprecated)](/using-flowise/agentflowv1)
  * [Multi-Agents](/using-flowise/agentflowv1/multi-agents)
  * [Sequential Agents](/using-flowise/agentflowv1/sequential-agents)
* [Prediction](/using-flowise/prediction)
* [Streaming](/using-flowise/streaming)
* [Document Stores](/using-flowise/document-stores)
* [Upsertion](/using-flowise/upsertion)
* [Analytics](https://github.com/FlowiseAI/FlowiseDocs/blob/main/en/using-flowise/broken-reference/README.md)
* [Monitoring](/using-flowise/monitoring)
* [Embed](/using-flowise/embed)
* [Uploads](/using-flowise/uploads)
* [Variables](/using-flowise/variables)
* [Workspaces](/using-flowise/workspaces)
* [Evaluations](/using-flowise/evaluations)


# Agentflow V2

Learn how to build multi-agent systems using Agentflow V2, written by @toi500

This guide explores the AgentFlow V2 architecture, detailing its core concepts, use cases, Flow State, and comprehensive node references.

{% hint style="warning" %}
**Disclaimer:** This documentation describes AgentFlow V2 as of its current official release. Features, functionalities, and node parameters are subject to change in future updates and versions of Flowise. Please refer to the latest official release notes or in-app information for the most up-to-date details.
{% endhint %}

{% embed url="https://youtu.be/-h4WQuzRHhI?si=jKkhueFIw06aO6Ge" %}

## Core Concept

AgentFlow V2 represents a significant architectural evolution, introducing a new paradigm in Flowise that focuses on explicit workflow orchestration and enhanced flexibility. Unlike V1's primary reliance on external frameworks for its core agent graph logic, V2 shifts the focus towards designing the entire workflow using a granular set of specialized, standalone nodes developed natively as core Flowise components.

In this V2 architecture, each node functions as an independent unit, executing a discrete operation based on its specific design and configuration. The visual connections between nodes on the canvas explicitly define the workflow's path and control sequence. Data can be passed between nodes by referencing the outputs of any previously executed node in the current flow, and the Flow State provides an explicit mechanism for managing and sharing data throughout the workflow.

V2 architecture implements a comprehensive node-dependency and execution queue system that precisely respects these defined pathways while maintaining clear separation between components, allowing workflows to become both more sophisticated and easier to design. This allows complex patterns such as loops, conditional branching, and human-in-the-loop interactions to be achieved, making the architecture more adaptable to diverse use cases while remaining maintainable and extensible.

<div data-full-width="false"><figure><img src="/files/JwfRi4bTLn9upCZagVty" alt=""><figcaption></figcaption></figure></div>

## Difference between Agentflow and Automation Platform

One of the most frequently asked questions: what is the difference between Agentflow and automation platforms like n8n, Make, or Zapier?

### 💬 **Agent-to-agent Communication**

Multimodal communication between agents is supported. A Supervisor agent can formulate and delegate tasks to multiple Worker agents, with outputs from the Worker agents subsequently returned to the Supervisor.

At each step, agents have access to the complete conversation history, enabling the Supervisor to determine the next task and the Worker agents to interpret the task, select appropriate tools, and execute actions accordingly.

This architecture enables **collaboration, delegation, and shared task management** across multiple agents; such capabilities are not typically offered by traditional automation tools.

<figure><picture><source srcset="/files/M17Jg3cSnpZjOe3D2hnU" media="(prefers-color-scheme: dark)"><img src="/files/XC8ZAZOJvz7XGvRo2sTO" alt=""></picture><figcaption></figcaption></figure>

### 🙋‍♂ Human-in-the-loop

Execution is paused while awaiting human input, without blocking the running thread. Each checkpoint is saved, allowing the workflow to resume from the same point even after an application restart.

The use of checkpoints enables **long-running, stateful agents**.

Agents can also be configured to **request permission before executing tools**, similar to how Claude asks for user approval before using MCP tools. This helps prevent the autonomous execution of sensitive actions without explicit user approval.

<figure><picture><source srcset="/files/X6R61VuTAKgTN7b76uiY" media="(prefers-color-scheme: dark)"><img src="/files/cA9nDGeagpdHbUVXFFG8" alt=""></picture><figcaption></figcaption></figure>

### 📖 Shared State

Shared state enables data exchange between agents, especially useful for passing data across branches or non-adjacent steps in a flow. Refer to [#understanding-flow-state](#understanding-flow-state "mention")

### ⚡ Streaming

Supports Server-Sent Events (SSE) for real-time streaming of LLM or agent responses. Streaming also enables subscription to execution updates as the workflow progresses.

<figure><img src="/files/nhkeiuz3xegeBlyRyFxa" alt=""><figcaption></figcaption></figure>

### 🌐 MCP Tools

While traditional automation platforms often feature extensive libraries of pre-built integrations, Agentflow allows MCP ([Model Context Protocol](https://github.com/modelcontextprotocol)) tools to be connected as part of the workflow, rather than functioning solely as agent tools.

Custom MCPs can also be created independently, without depending on platform-provided integrations. MCP is widely considered an industry standard and is typically supported and maintained by the official providers. For example, the GitHub MCP is developed and maintained by the GitHub team, with similar support provided for Atlassian Jira, Brave Search, and others.

<figure><picture><source srcset="/files/huhxGIHpjm7Tyq3C8G45" media="(prefers-color-scheme: dark)"><img src="/files/mG2Z1WgL6JICbAsAWkhC" alt=""></picture><figcaption></figcaption></figure>

## Agentflow V2 Node Reference

This section provides a detailed reference for each available node, outlining its specific purpose, key configuration parameters, expected inputs, generated outputs, and its role within the AgentFlow V2 architecture.

<figure><picture><source srcset="/files/6fahkmZYYx3MdUhkXieQ" media="(prefers-color-scheme: dark)"><img src="/files/V2jrxcNZOh8DzhINLUS2" alt=""></picture><figcaption></figcaption></figure>

***

### **1. Start Node**

The designated entry point for initiating any AgentFlow V2 workflow execution. Every flow must begin with this node.

* **Functionality:** Defines how the workflow is triggered and sets up the initial conditions. It can accept input either directly from the chat interface or through a customizable form presented to the user. It also allows for the initialization of `Flow State` variables at the beginning of the execution and can manage how conversation memory is handled for the run.
* **Configuration Parameters**
  * **Input Type**: Determines how the workflow execution is initiated, either by `Chat Input` from the user or via a submitted `Form Input`.
    * **Form Title, Form Description, Form Input Types**: If `Form Input` is selected, these fields configure the appearance of the form presented to the user, allowing for various input field types with defined labels and variable names.
  * **Ephemeral Memory**: If enabled, instructs the workflow to begin the execution without considering any past messages from the conversation thread, effectively starting with a clean memory slate.
  * **Flow State**: Defines the complete set of initial key-value pairs for the workflow's runtime state `$flow.state`. All state keys that will be used or updated by subsequent nodes must be declared and initialized here.
* **Inputs:** Receives the initial data that triggers the workflow, which will be either a chat message or the data submitted through a form.
* **Outputs:** Provides a single output anchor to connect to the first operational node, passing along the initial input data and the initialized Flow State.
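As an illustration, a Start node's Flow State might initialize two keys roughly as sketched below. The key names are hypothetical, and the exact shape stored by the UI may differ:

```json
[
  { "key": "userQuery", "value": "" },
  { "key": "retryCount", "value": "0" }
]
```

Subsequent nodes could then read these values via `$flow.state.userQuery` and `$flow.state.retryCount`.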

<figure><picture><source srcset="/files/8Y0PAQNj2TnNxgmuSv79" media="(prefers-color-scheme: dark)"><img src="/files/yeeMqJlmIUtHM8zYvR30" alt="" width="343"></picture><figcaption></figcaption></figure>

***

### **2. LLM Node**

Provides direct access to a configured Large Language Model (LLM) for executing AI tasks, enabling the workflow to perform structured data extraction if needed.

* **Functionality:** This node sends requests to an LLM based on provided instructions (Messages) and context. It can be used for text generation, summarization, translation, analysis, answering questions, and generating structured JSON output according to a defined schema. It has access to memory for the conversation thread and can read/write to the `Flow State`.
* **Configuration Parameters**
  * **Model**: Specifies the AI model from a chosen service — e.g., OpenAI's GPT-4o or Google Gemini.
  * **Messages**: Define the conversational input for the LLM, structuring it as a sequence of roles — System, User, Assistant, Developer — to guide the AI's response. Dynamic data can be inserted using `{{ variable }}`.
  * **Memory**: If enabled, determines if the LLM should consider the history of the current conversation thread when generating its response.
    * **Memory Type, Window Size, Max Token Limit**: If memory is used, these settings refine how the conversation history is managed and presented to the LLM — for example, whether to include all messages, only a recent window of turns, or a summarized version.
    * **Input Message**: Specifies the variable or text that will be appended as the most recent user message at the end of the existing conversation context — including initial context and memory — before being processed by the LLM/Agent.
  * **Return Response As**: Configures how the LLM's output is categorized — as a `User Message` or `Assistant Message` — which can influence how it's handled by subsequent memory systems or logging.
  * **JSON Structured Output**: Instructs the LLM to format its output according to a specific JSON schema — including keys, data types, and descriptions — ensuring predictable, machine-readable data.
  * **Update Flow State**: Allows the node to modify the workflow's runtime state `$flow.state` during execution by updating pre-defined keys. This makes it possible, for example, to store this LLM node's output under such a key, making it accessible to subsequent nodes.
* **Inputs:** This node utilizes data from the workflow's initial trigger or from the outputs of preceding nodes, incorporating this data into the `Messages` or `Input Message` fields. It can also retrieve values from `$flow.state` when input variables reference it.
* **Outputs:** Produces the LLM's response, which will be either plain text or a structured JSON object. The categorization of this output — as User or Assistant — is determined by the `Return Response` setting.
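For example, a JSON Structured Output schema for a simple sentiment task could be sketched as below; the keys and descriptions are illustrative, and the exact way the schema is entered in the node's UI may differ:

```json
{
  "type": "object",
  "properties": {
    "sentiment": {
      "type": "string",
      "description": "One of: positive, negative, neutral"
    },
    "summary": {
      "type": "string",
      "description": "One-sentence summary of the user's message"
    }
  },
  "required": ["sentiment", "summary"]
}
```

A downstream node could then consume individual fields of this structured output, for instance via a reference like `{{ llmNode_0.output.sentiment }}` (the exact reference depends on the node's ID in your flow).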

<figure><picture><source srcset="/files/2wOALvNmXkhOMoPqe5Lq" media="(prefers-color-scheme: dark)"><img src="/files/o7ArwuH69M4Y9of99tfH" alt="" width="375"></picture><figcaption></figcaption></figure>

***

### **3. Agent Node**

Represents an autonomous AI entity capable of reasoning, planning, and interacting with tools or knowledge sources to accomplish a given objective.

* **Functionality:** This node uses an LLM to dynamically decide a sequence of actions. Based on the user's goal — provided via messages/input — it can choose to use available Tools or query Document Stores to gather information or perform actions. It manages its own reasoning cycle and can utilize memory for the conversation thread and `Flow State`. Suitable for tasks requiring multi-step reasoning or interacting dynamically with external systems or tools.
* **Configuration Parameters**
  * **Model**: Specifies the AI model from a chosen service — e.g., OpenAI's GPT-4o or Google Gemini — that will drive the agent's reasoning and decision-making processes.
  * **Messages**: Define the initial conversational input, objective, or context for the agent, structuring it as a sequence of roles — System, User, Assistant, Developer — to guide the agent's understanding and subsequent actions. Dynamic data can be inserted using `{{ variable }}`.
  * **Tools**: Specify which pre-defined Flowise Tools the agent is authorized to use to achieve its goals.
    * For each selected tool, an optional **Require Human Input flag** indicates if the tool's operation might itself pause to ask for human intervention.
  * **Knowledge / Document Stores**: Configure access to information within Flowise-managed Document Stores.
    * **Document Store**: Choose a pre-configured Document Store from which the agent can retrieve information. These stores must be set up and populated in advance.
    * **Describe Knowledge**: Provide a natural language description of the content and purpose of this Document Store. This description guides the agent in understanding what kind of information the store contains and when it would be appropriate to query it.
  * **Knowledge / Vector Embeddings**: Configure access to external, pre-existing vector stores as additional knowledge sources for the agent.
    * **Vector Store**: Selects the specific, pre-configured vector database the agent can query.
    * **Embedding Model**: Specifies the embedding model associated with the selected vector store, ensuring compatibility for queries.
    * **Knowledge Name**: Assigns a short, descriptive name to this vector-based knowledge source, which the agent can use for reference.
    * **Describe Knowledge**: Provide a natural language description of the content and purpose of this vector store, guiding the agent on when and how to utilize this specific knowledge source.
    * **Return Source Documents**: If enabled, instructs the agent to include source document information with the data retrieved from the vector store.
  * **Memory**: If enabled, determines if the agent should consider the history of the current conversation thread when making decisions and generating responses.
    * **Memory Type, Window Size, Max Token Limit**: If memory is used, these settings refine how the conversation history is managed and presented to the agent — for example, whether to include all messages, only a recent window of turns, or a summarized version.
    * **Input Message**: Specifies the variable or text that will be appended as the most recent user message at the end of the existing conversation context — including initial context and memory — before being processed by the LLM/Agent.
  * **Return Response**: Configures how the agent's final output or message is categorized — as a User Message or Assistant Message — which can influence how it's handled by subsequent memory systems or logging.
  * **Update Flow State**: Allows the node to modify the workflow's runtime state `$flow.state` during execution by updating pre-defined keys. This makes it possible, for example, to store this Agent node's output under such a key, making it accessible to subsequent nodes.
* **Inputs:** This node utilizes data from the workflow's initial trigger or from the outputs of preceding nodes, often incorporated into the `Messages` or `Input Message` fields. It accesses the configured tools and knowledge sources as needed.
* **Outputs:** Produces the final result or response generated by the agent after it has completed its reasoning, planning, and any interactions with tools or knowledge sources.
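To make the `Messages` structure concrete, the sketch below shows a plausible role sequence with one dynamic insertion; the `{{ question }}` variable name is hypothetical:

```json
[
  {
    "role": "system",
    "content": "You are a research assistant. Query the product-docs knowledge store only when the question concerns product documentation."
  },
  {
    "role": "user",
    "content": "{{ question }}"
  }
]
```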

<figure><picture><source srcset="/files/Br3jTIa9g6MTGccBzHp4" media="(prefers-color-scheme: dark)"><img src="/files/pLfSVjeFjyg2Jf5fWWEd" alt="" width="375"></picture><figcaption></figcaption></figure>

***

### **4. Tool Node**

Provides a mechanism for directly and deterministically executing a specific, pre-defined Flowise Tool within the workflow sequence. Unlike the Agent node, where the LLM dynamically chooses a tool based on reasoning, the Tool node executes exactly the tool selected by the workflow designer during configuration.

* **Functionality:** This node is used when the workflow requires the execution of a known, specific capability at a defined point, with readily available inputs. It ensures deterministic action without involving LLM reasoning for tool selection.
* **How it Works**
  1. **Triggering:** When the workflow execution reaches a Tool node, it activates.
  2. **Tool Identification:** It identifies the specific Flowise Tool selected in its configuration.
  3. **Input Argument Resolution:** It looks at the Tool Input Arguments configuration and resolves a value for each required input parameter of the selected tool, whether that value is static text or a dynamic variable.
  4. **Execution:** It invokes the underlying code or API call associated with the selected Flowise Tool, passing the resolved input arguments.
  5. **Output Generation:** It receives the result returned by the tool's execution.
  6. **Output Propagation:** It makes this result available via its output anchor for subsequent nodes to use.
* **Configuration Parameters**
  * **Tool Selection**: Choose the specific, registered Flowise Tool that this node will execute from a dropdown list.
  * **Input Arguments**: Define how data from your workflow is supplied to the selected tool. This section dynamically adapts based on the chosen tool, presenting its specific required input parameters:
    * **Map Argument Name**: For each input the selected tool requires (e.g., `input` for a Calculator), this field will show the expected parameter name as defined by the tool itself.
    * **Provide Argument Value**: Set the value for that corresponding parameter, using a dynamic variable like `{{ previousNode.output }}`, `{{ $flow.state.someKey }}`, or by entering static text.
  * **Update Flow State**: Allows the node to modify the workflow's runtime state `$flow.state` during execution by updating pre-defined keys. This makes it possible, for example, to store this Tool node's output under such a key, making it accessible to subsequent nodes.
* **Inputs:** Receives necessary data for the tool's arguments via the `Input Arguments` mapping, sourcing values from previous node outputs, `$flow.state`, or static configurations.
* **Outputs:** Produces the raw output generated by the executed tool — e.g., a JSON string from an API, a text result, or a numerical value.
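Using the Calculator example above, the argument mapping might look like the following sketch, where `llmNode_0` is a hypothetical upstream node ID:

```json
{
  "input": "{{ llmNode_0.output }}"
}
```

The same slot could instead reference `{{ $flow.state.someKey }}` or hold static text.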

<figure><picture><source srcset="/files/miicCpJE0t9Wc9ledVQ3" media="(prefers-color-scheme: dark)"><img src="/files/H84TwTj6p9qgyOUJftwV" alt="" width="375"></picture><figcaption></figcaption></figure>

***

### **5. Retriever Node**

Performs targeted information retrieval from configured Document Stores.

* **Functionality:** This node queries one or more specified Document Stores, fetching relevant document chunks based on semantic similarity. It's a focused alternative to using an Agent node when the only required action is retrieval and dynamic tool selection by an LLM is not necessary.
* **Configuration Parameters**
  * **Knowledge / Document Stores**: Specify which pre-configured and populated Document Store(s) this node should query to find relevant information.
  * **Retriever Query**: Define the text query that will be used to search the selected Document Stores. Dynamic data can be inserted using `{{ variables }}`.
  * **Output Format**: Choose how the retrieved information should be presented — either as plain `Text` or as `Text with Metadata`, which might include details like source document names or locations.
  * **Update Flow State**: Allows the node to modify the workflow's runtime state `$flow.state` during execution by updating pre-defined keys. This makes it possible, for example, to store this Retriever node's output under such a key, making it accessible to subsequent nodes.
* **Inputs:** Requires a query string — often supplied as a variable from a previous step or user input — and accesses the selected Document Stores for information.
* **Outputs:** Produces the document chunks retrieved from the knowledge base, formatted according to the chosen `Output Format`.

<figure><picture><source srcset="/files/RkTqVU8sRsu9shfmmDbP" media="(prefers-color-scheme: dark)"><img src="/files/kaLMaDUIAVINnBXvClXL" alt="" width="375"></picture><figcaption></figcaption></figure>

***

### 6. HTTP Node

Facilitates direct communication with external web services and APIs via the Hypertext Transfer Protocol (HTTP).

* **Functionality:** This node enables the workflow to interact with any external system accessible via HTTP. It can send various types of requests (GET, POST, PUT, DELETE, PATCH) to a specified URL, allowing for integration with third-party APIs, fetching data from web resources, or triggering external webhooks. The node supports configuration of authentication methods, custom headers, query parameters, and different request body types to accommodate diverse API requirements.
* **Configuration Parameters**
  * **HTTP Credential**: Optionally select pre-configured credentials — such as Basic Auth, Bearer Token, or API Key — to authenticate requests to the target service.
  * **Request Method**: Specify the HTTP method to be used for the request — e.g., `GET`, `POST`, `PUT`, `DELETE`, `PATCH`.
  * **Target URL**: Define the complete URL of the external endpoint to which the request will be sent.
  * **Request Headers**: Set any necessary HTTP headers as key-value pairs to be included in the request.
  * **URL Query Parameters**: Define key-value pairs that will be appended to the URL as query parameters.
  * **Request Body Type**: Choose the format of the request payload if sending data — options include `JSON`, `Raw text`, `Form Data`, or `x-www-form-urlencoded`.
  * **Request Body**: Provide the actual data payload for methods like POST or PUT. The format should match the selected `Body Type`, and dynamic data can be inserted using `{{ variables }}`.
  * **Response Type**: Specify how the workflow should interpret the response received from the server — options include `JSON`, `Text`, `Array Buffer`, or `Base64` for binary data.
* **Inputs:** Receives configuration data such as the URL, method, headers, and body, often incorporating dynamic values from previous workflow steps or `$flow.state`.
* **Outputs:** Produces the response received from the external server, parsed according to the selected `Response Type`.
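Pulling these parameters together, a POST request configuration might be sketched as follows. The field names approximate the node's settings rather than an exact export format, the URL and header are illustrative, and the body references assume upstream values exist under those names:

```json
{
  "method": "POST",
  "url": "https://api.example.com/tickets",
  "headers": { "Content-Type": "application/json" },
  "bodyType": "JSON",
  "body": {
    "title": "{{ $flow.state.ticketTitle }}",
    "description": "{{ llmNode_0.output }}"
  },
  "responseType": "JSON"
}
```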

<figure><picture><source srcset="/files/OR8jHsLE8StOJeEwPRTA" media="(prefers-color-scheme: dark)"><img src="/files/mybWMhM9HUdK2mAq0o9T" alt="" width="375"></picture><figcaption></figcaption></figure>

***

### **7. Condition Node**

Implements deterministic branching logic within the workflow based on defined rules.

* **Functionality:** This node acts as a decision point, evaluating one or more specified conditions to direct the workflow down different paths. It compares input values — which can be strings, numbers, or booleans — using a variety of logical operators, such as equals, contains, greater than, or is empty. Based on whether these conditions evaluate to true or false, the workflow execution proceeds along one of the distinct output branches connected to this node.
* **Configuration Parameters**
  * **Conditions**: Configure the set of logical rules the node will evaluate.
    * **Type**: Specify the type of data being compared for this rule — `String`, `Number`, or `Boolean`.
    * **Value 1**: Define the first value for the comparison. Dynamic data can be inserted using `{{ variables }}`.
    * **Operation**: Select the logical operator to apply between Value 1 and Value 2 — e.g., `equal`, `notEqual`, `contains`, `larger`, `isEmpty`.
    * **Value 2**: Define the second value for the comparison, if required by the chosen operation. Dynamic data can also be inserted here using `{{ variables }}`.
* **Inputs:** Requires the data for `Value 1` and `Value 2` for each condition being evaluated. These values are supplied from previous node outputs or retrieved from `$flow.state`.
* **Outputs:** Provides multiple output anchors, corresponding to the boolean outcome (true/false) of the evaluated conditions. The workflow continues along the specific path connected to the output anchor that matches the result.
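As a sketch, a single rule that routes on an order's status might be configured like this; the state key and comparison value are hypothetical:

```json
{
  "type": "String",
  "value1": "{{ $flow.state.orderStatus }}",
  "operation": "equal",
  "value2": "shipped"
}
```

A true result would send execution down the output anchor for this condition; a false result would follow the remaining branch.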

<figure><picture><source srcset="/files/Uzl7tIxkaErps0JFVzzu" media="(prefers-color-scheme: dark)"><img src="/files/ofN02rwDv6GPNzHcxxyi" alt="" width="375"></picture><figcaption></figcaption></figure>

***

### **8. Condition Agent Node**

Provides AI-driven dynamic branching based on natural language instructions and context.

* **Functionality:** This node uses a Large Language Model (LLM) to route the workflow. It analyzes provided input data against a set of user-defined "Scenarios" — potential outcomes or categories — guided by high-level natural language "Instructions" that define the decision-making task. The LLM then determines which scenario best fits the current input context. Based on this AI-driven classification, the workflow execution proceeds down the specific output path corresponding to the chosen scenario. This node is particularly useful for tasks like user intent recognition, complex conditional routing, or nuanced situational decision-making where simple, predefined rules — as in the Condition Node — are insufficient.
* **Configuration Parameters**
  * **Model**: Specifies the AI model from a chosen service that will perform the analysis and scenario classification.
  * **Instructions**: Define the overall goal or task for the LLM in natural language — e.g., "Determine if the user's request is about sales, support, or general inquiry."
  * **Input**: Specify the data, often text from a previous step or user input, using `{{ variables }}`, that the LLM will analyze to make its routing decision.
  * **Scenarios**: Configure an array defining the possible outcomes or distinct paths the workflow can take. Each scenario is described in natural language — e.g., "Sales Inquiry," "Support Request," "General Question" — and each corresponds to a unique output anchor on the node.
* **Inputs:** Requires the `Input` data for analysis and the `Instructions` to guide the LLM.
* **Outputs:** Provides multiple output anchors, one for each defined `Scenario`. The workflow continues along the specific path connected to the output anchor that the LLM determines best matches the input.
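Combining these parameters, a routing setup for the triage example above might be sketched as follows; the field names approximate the UI, and `{{ question }}` is a hypothetical variable:

```json
{
  "instructions": "Determine if the user's request is about sales, support, or a general inquiry.",
  "input": "{{ question }}",
  "scenarios": ["Sales Inquiry", "Support Request", "General Question"]
}
```

Each entry in `scenarios` would correspond to one output anchor on the node.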

<figure><picture><source srcset="/files/TTnlUWwA1CryG6WTOD6O" media="(prefers-color-scheme: dark)"><img src="/files/McKPm1RJivg1vqB86c2s" alt="" width="375"></picture><figcaption></figcaption></figure>

***

### **9. Iteration Node**

Executes a defined "sub-flow" — a sequence of nodes nested within it — for each item in an input array, implementing a "for-each" loop.

* **Functionality:** This node is designed for processing collections of data. It takes an array, either provided directly or referenced via a variable, as its input. For every individual element within that array, the Iteration Node sequentially executes the sequence of other nodes that are visually placed inside its boundaries on the canvas.
* **Configuration Parameters**
  * **Array Input**: Specifies the input array that the node will iterate over. This is provided by referencing a variable that holds an array from a previous node's output or from the `$flow.state` — e.g., `{{ $flow.state.itemList }}`.
* **Inputs:** Requires an array to be supplied to its `Array Input` parameter.
* **Outputs:** Provides a single output anchor that becomes active only after the nested sub-flow has completed execution for all items in the input array. The data passed through this output can include aggregated results or the final state of variables modified within the loop, depending on the design of the sub-flow. Nodes placed inside the iteration block have their own distinct input and output connections that define the sequence of operations for each item.
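For instance, assuming the Start node's Flow State initialized a key holding a JSON array (the `itemList` name is hypothetical):

```json
{
  "key": "itemList",
  "value": "[\"alpha\", \"beta\", \"gamma\"]"
}
```

the Iteration node's `Array Input` could then reference `{{ $flow.state.itemList }}`, and the nested sub-flow would execute once per element.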

<figure><picture><source srcset="/files/M02XJtasmgvKmHRqvVR9" media="(prefers-color-scheme: dark)"><img src="/files/oxFDouqnWzZW5FfIXQ08" alt="" width="563"></picture><figcaption></figcaption></figure>
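The per-item behavior described above can be pictured as a plain for-each loop. This is an illustrative sketch only; `subFlow` stands in for the sequence of nodes nested inside the iteration block.

```javascript
// Illustrative sketch of the Iteration Node's "for-each" behavior.
// `subFlow` stands in for the nodes nested inside the block.
function runIteration(arrayInput, subFlow) {
  const results = [];
  for (const item of arrayInput) {
    // The nested sub-flow executes once per element.
    results.push(subFlow(item));
  }
  // The single output anchor fires only after every item is processed.
  return results;
}

const out = runIteration(["draft-1", "draft-2"], (item) => `reviewed: ${item}`);
console.log(JSON.stringify(out));
```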

***

### **10. Loop Node**

Explicitly redirects the workflow execution back to a previously executed node.

* **Functionality:** This node enables the creation of cycles or iterative retries within a workflow. When the execution flow reaches the Loop Node, it does not proceed forward to a new node; instead, it "jumps" back to a specified target node that has already been executed earlier in the current workflow run. This action causes the re-execution of that target node and any subsequent nodes in that part of the flow.
* **Configuration Parameters**
  * **Loop Back To**: Selects the unique ID of a previously executed node within the current workflow to which the execution should return.
  * **Max Loop Count**: Defines the maximum number of times this loop operation can be performed within a single workflow execution, safeguarding against infinite cycles. The default value is 5.
* **Inputs:** Receives the execution signal to activate. It internally tracks the number of times the loop has occurred for the current execution.
* **Outputs:** This node does not have a standard forward-pointing output anchor, as its primary function is to redirect the execution flow backward to the `Loop Back To` target node, from where the workflow then continues.

<figure><picture><source srcset="/files/YcDZS56NEEg4k2N8PhwE" media="(prefers-color-scheme: dark)"><img src="/files/yUBIO2cYRk7iMi3dkUBS" alt="" width="375"></picture><figcaption></figcaption></figure>

***

### **11. Human Input Node**

Pauses the workflow execution to request explicit input, approval, or feedback from a human user — a key component for Human-in-the-Loop (HITL) processes.

* **Functionality:** This node halts the automated progression of the workflow and presents information or a question to a human user via the chat interface. The content displayed to the user can either be predefined, static text or dynamically generated by an LLM based on the current workflow context. The user is provided with distinct action choices — e.g., "Proceed," "Reject" — and, if enabled, a field to provide textual feedback. Once the user makes a selection and submits their response, the workflow resumes execution along the specific output path corresponding to their chosen action.
* **Configuration Parameters**
  * **Description Type**: Determines how the message or question presented to the user is generated — either `Fixed` (static text) or `Dynamic` (generated by an LLM).
    * **If `Description Type` is `Fixed`**
      * **Description**: This field contains the exact text to be displayed to the user. It supports the insertion of dynamic data using `{{ variables }}`.
    * **If `Description Type` is `Dynamic`**
      * **Model**: Selects the AI model from a chosen service that will generate the user-facing message.
      * **Prompt**: Provides the instructions or prompt for the selected LLM to generate the message shown to the user.
  * **Feedback:** If enabled, the user will be prompted with a feedback window to leave their feedback, and this feedback will be appended to the node's output.
* **Inputs:** Receives the execution signal to pause the workflow. It can utilize data from previous steps or `$flow.state` through variables in the `Description` or `Prompt` fields if configured for dynamic content.
* **Outputs:** Provides two output anchors, each corresponding to a distinct user action — an anchor for "proceed" and another for "reject". The workflow continues along the path connected to the anchor matching the user's selection.

<figure><picture><source srcset="/files/JpLWBi2prlHo4ODUGxJ3" media="(prefers-color-scheme: dark)"><img src="/files/x5to4wddbVZzjKyEr3vL" alt="" width="375"></picture><figcaption></figcaption></figure>

***

### **12. Direct Reply Node**

Sends a final message to the user and terminates the current execution path.

* **Functionality:** This node serves as an endpoint for a specific branch or the entirety of a workflow. It takes a configured message — which can be static text or dynamic content from a variable — and delivers it directly to the end-user through the chat interface. Upon sending this message, the execution along this particular path of the workflow concludes; no further nodes connected from this point will be processed.
* **Configuration Parameters**
  * **Message**: Define the text or variable `{{ variable }}` that holds the content to be sent as the final reply to the user.
* **Inputs:** Receives the message content, which is sourced from a previous node's output or a value stored in `$flow.state`.
* **Outputs:** This node has no output anchors, as its function is to terminate the execution path after sending the reply.

<figure><picture><source srcset="/files/HDfu0oUZktOqjPzF0m4u" media="(prefers-color-scheme: dark)"><img src="/files/gs0srdlrnYxOdkhwaEe8" alt="" width="375"></picture><figcaption></figcaption></figure>

***

### **13. Custom Function Node**

Provides a mechanism for executing custom server-side Javascript code within the workflow.

* **Functionality:** This node lets you write and run arbitrary Javascript snippets, offering an effective way to implement complex data transformations, bespoke business logic, or interactions with resources not directly supported by other standard nodes. The executed code operates within a Node.js environment and has specific ways to access data:
  * **Input Variables:** Values passed via the `Input Variables` configuration are accessible within the function, typically prefixed with `$` — e.g., if an input variable `userid` is defined, it can be accessed as `$userid`.
  * **Flow Context:** Default flow configuration variables are available, such as `$flow.sessionId`, `$flow.chatId`, `$flow.chatflowId`, `$flow.input` — the initial input that started the workflow — and the entire `$flow.state` object.
  * **Custom Variables:** Any custom variables set up in Flowise — e.g., `$vars.<variable-name>`.
  * **Libraries:** The function can utilize any libraries that have been imported and made available within the Flowise backend environment. **The function must return a string value at the end of its execution.**
* **Configuration Parameters**
  * **Input Variables**: Configure an array of input definitions that will be passed as variables into the scope of your Javascript function. For each variable you wish to define, you will specify:
    * **Variable Name**: The name you will use to refer to this variable within your Javascript code, typically prefixed with a `$` — e.g., if you enter `myValue` here, you might access it as `$myValue` in the script, corresponding to how input schema properties are mapped.
    * **Variable Value**: The actual data to be assigned to this variable, which can be static text or, more commonly, a dynamic value sourced from the workflow — e.g., `{{ previousNode.output }}` or `{{ $flow.state.someKey }}`.
  * **Javascript Function**: The code editor field where the server-side Javascript function is written. This function must ultimately return a string value.
  * **Update Flow State**: Allows the node to modify the workflow's runtime state `$flow.state` during execution by updating pre-defined keys. This makes it possible, for example, to store this Custom Function node's string output under such a key, making it accessible to subsequent nodes.
* **Inputs:** Receives data through the variables configured in `Input Variables`. Can also implicitly access elements of the `$flow` context and `$vars`.
* **Outputs:** Produces the string value returned by the executed Javascript function.

<figure><picture><source srcset="/files/SP7hjty49uXWKd75TMQ6" media="(prefers-color-scheme: dark)"><img src="/files/kBYCpRMjlFfBY2Llcl6c" alt="" width="375"></picture><figcaption></figcaption></figure>
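As a concrete illustration, the snippet below shows the kind of body that could be pasted into the `Javascript Function` field. It is wrapped in a named function here so it runs standalone; inside Flowise, only the inner statements would go in the editor, and `$userid` (a configured Input Variable) and `$flow` would be injected by the runtime. The variable names are assumptions for this example.

```javascript
// Sketch of a possible Custom Function body. In Flowise, `$userid` (an
// Input Variable) and `$flow` are injected by the runtime; they are passed
// as parameters here purely so the example can run on its own.
function customFunctionBody($userid, $flow) {
  const summary = {
    user: $userid,
    session: $flow.sessionId,
    status: $flow.state.customerStatus,
  };
  // The node requires the function to return a string.
  return JSON.stringify(summary);
}

// Simulated runtime values, for local testing only:
const reply = customFunctionBody("user-123", {
  sessionId: "sess-1",
  state: { customerStatus: "gold" },
});
console.log(reply);
```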

***

### **14. Execute Flow Node**

Enables the invocation and execution of another complete Flowise Chatflow or AgentFlow from within the current workflow.

* **Functionality:** This node functions as a sub-workflow caller, promoting modular design and reusability of logic. It allows the current workflow to trigger a separate, pre-existing workflow — identified by its name or ID within the Flowise instance — pass an initial input to it, optionally override specific configurations of the target flow for that particular run, and then receive its final output back into the calling workflow to continue processing.
* **Configuration Parameters**
  * **Connect Credential**: Optionally provide Chatflow API credentials if the target flow being called requires specific authentication or permissions for execution.
  * **Select Flow**: Specify the particular Chatflow or AgentFlow that this node will execute from the list of available flows in your Flowise instance.
  * **Input**: Define the data — static text or `{{ variable }}` — that will be passed as the primary input to the target workflow when it is invoked.
  * **Override Config**: Optionally provide a JSON object containing parameters that will override the default configuration of the target workflow specifically for this execution instance — e.g., temporarily changing a model or prompt used in the sub-flow.
  * **Base URL**: Optionally specify an alternative base URL for the Flowise instance that hosts the target flow. This is useful in distributed setups or when flows are accessed via different routes, defaulting to the current instance's URL if not set.
  * **Return Response As**: Determine how the final output from the executed sub-flow should be categorized when it's returned to the current workflow — as a `User Message` or `Assistant Message`.
  * **Update Flow State**: Allows the node to modify the workflow's runtime state `$flow.state` during execution by updating pre-defined keys. This makes it possible, for example, to store this Execute Flow node's output under such a key, making it accessible to subsequent nodes.
* **Inputs:** Requires the selection of a target flow and the `Input` data for it.
* **Outputs:** Produces the final output returned by the executed target workflow, formatted according to the `Return Response As` setting.

<figure><picture><source srcset="/files/uJyJPJ0SKJ46WfTVsgZV" media="(prefers-color-scheme: dark)"><img src="/files/u4opOBXTjpEJcSzpcixo" alt="" width="375"></picture><figcaption></figcaption></figure>
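Conceptually, an `Override Config` applies only to the one invocation: the target flow's defaults are merged with the override, and the override wins on conflicts. The sketch below illustrates that merge; the key names (`temperature`, `systemMessage`) are assumptions, since the valid keys depend on the nodes used in the target flow.

```javascript
// Illustrative merge semantics for Override Config. Key names are
// assumptions; valid keys depend on the target flow's nodes.
const targetFlowDefaults = {
  temperature: 0.7,
  systemMessage: "You are helpful.",
};
const overrideConfig = { temperature: 0.2 }; // applies to this run only

const effectiveConfig = { ...targetFlowDefaults, ...overrideConfig };
console.log(JSON.stringify(effectiveConfig));
```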

## Understanding Flow State

A key architectural feature enabling the flexibility and data management capabilities of AgentFlow V2 is the **Flow State**. This mechanism provides a way to manage and share data dynamically throughout the execution of a single workflow instance.

### **What is Flow State?**

* Flow State (`$flow.state`) is a **runtime, key-value store** that is shared among the nodes in a single execution.
* It functions as temporary memory or a shared context that exists only for the duration of that particular run/execution.

### **Purpose of Flow State**

The primary purpose of `$flow.state` is to enable **explicit data sharing and communication between nodes, especially those that may not be directly connected** in the workflow graph, or when data needs to be intentionally persisted and modified across multiple steps. It addresses several common orchestration challenges:

1. **Passing Data Across Branches:** If a workflow splits into conditional paths, data generated or updated in one branch can be stored in `$flow.state` to be accessed later if the paths merge or if other branches need that information.
2. **Accessing Data Across Non-Adjacent Steps:** Information initialized or updated by an early node can be retrieved by a much later node without needing to pass it explicitly through every intermediate node's inputs and outputs.

### **How Flow State Works**

1. **Initialization / Declaration of Keys**
   * All state keys that will be used throughout the workflow **must be initialized** with their default (even if empty) values using the `Flow State` parameter within the **Start node**. This step effectively declares the schema or structure of your `$flow.state` for that workflow. You define the initial key-value pairs here.

<figure><picture><source srcset="/files/rUGbe52ghDWsTmVe0gQM" media="(prefers-color-scheme: dark)"><img src="/files/RlQyje5Xjccv5m5JXBTF" alt=""></picture><figcaption></figcaption></figure>

2. **Updating State / Modifying Existing Keys**

* Many operational nodes — e.g., `LLM`, `Agent`, `Tool`, `HTTP`, `Retriever`, `Custom Function` — include an `Update Flow State` parameter in their configuration.
* This parameter allows the node to **modify the values of pre-existing keys** within `$flow.state`.
* The value can be static text, the direct output of the current node, the output of a previous node, or many other variables. Typing `{{` shows all the available variables.
* When the node executes successfully, it **updates** the specified key(s) in `$flow.state` with the new value(s). **New keys cannot be created by operational nodes; only pre-defined keys can be updated.**

<figure><picture><source srcset="/files/qBkKkjHmc5QhPuiBD3Wg" media="(prefers-color-scheme: dark)"><img src="/files/cg5Clb78WqB7QPPrIFuO" alt=""></picture><figcaption></figcaption></figure>
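The declare-then-update rule can be modeled in a few lines. This is a conceptual sketch, not Flowise internals: keys exist only because the Start node declared them, and operational nodes may only overwrite those keys.

```javascript
// Conceptual model of $flow.state: keys declared at the Start node can be
// updated later, but operational nodes can never create new keys.
function makeFlowState(initialKeys) {
  const state = { ...initialKeys };
  return {
    get: (key) => state[key],
    update(key, value) {
      if (!(key in state)) {
        throw new Error(`Key "${key}" was not declared in the Start node`);
      }
      state[key] = value;
    },
  };
}

const flowState = makeFlowState({ customerStatus: "" }); // declared at Start
flowState.update("customerStatus", "gold");              // OK: key exists
console.log(flowState.get("customerStatus"));
// flowState.update("newKey", 1); // would throw: key was never declared
```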

3. **Reading from State**

* Any node input parameter that accepts variables can read values from the Flow State.
* Use the specific syntax: `{{ $flow.state.yourKey }}` — replace `yourKey` with the actual key name that was initialized in the Start Node.
* For example, an LLM node's prompt might include `"...based on the user status: {{ $flow.state.customerStatus }}"`.

<figure><picture><source srcset="/files/s4zMtBKSkethZJBs75By" media="(prefers-color-scheme: dark)"><img src="/files/9BekM8Iw7423YECpZAG4" alt=""></picture><figcaption></figcaption></figure>
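A simplified picture of how such a placeholder resolves at runtime (an illustration, not Flowise's actual template resolver):

```javascript
// Minimal stand-in for {{ $flow.state.key }} interpolation.
function resolveStatePlaceholders(template, flow) {
  return template.replace(
    /\{\{\s*\$flow\.state\.(\w+)\s*\}\}/g,
    (_, key) => String(flow.state[key] ?? "")
  );
}

const prompt = resolveStatePlaceholders(
  "...based on the user status: {{ $flow.state.customerStatus }}",
  { state: { customerStatus: "premium" } }
);
console.log(prompt);
```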

### **Scope and Persistence:**

* It is created and initialized when a workflow execution begins and is destroyed when that specific execution ends.
* It does **not** persist across different user sessions or separate runs of the same workflow.
* Each concurrent execution of the workflow maintains its own independent `$flow.state`.

## Video Resources

{% embed url="https://youtu.be/SLVVDUIbIBE?si=VU1m_btfDzVNl-PP" %}

{% embed url="https://youtu.be/h9N9wCrP9u4?si=8-9a9fktpxAykXXH" %}


# Agentflow V1 (Deprecating)

Learn about how to build agentic systems in Flowise

## Introducing Agentic Systems in Flowise

Flowise's Agentflows section provides a platform for building agent-based systems that can interact with external tools and data sources.

Currently, Flowise offers two approaches for designing these systems: [**Multi-Agents**](#user-content-fn-1)[^1] and [**Sequential Agents**](#user-content-fn-2)[^2]. These approaches provide different levels of control and complexity, allowing you to choose the best fit for your needs.

<figure><img src="/files/mqqBM1pCdjUygS8NViMn" alt=""><figcaption><p>Flowise APP</p></figcaption></figure>

{% hint style="success" %}
This documentation will explore both the Sequential Agent and Multi-Agent approaches, explaining their features and how they can be used to build different types of conversational workflows.
{% endhint %}

[^1]: **Multi-Agents**, built on top of the Sequential Agent architecture, simplify the process of building and managing teams of agents by pre-configuring core elements and providing a higher-level abstraction.

[^2]: **Sequential Agents** provide developers with direct access to the underlying workflow structure, enabling granular control over every step of the conversation flow and offering maximum flexibility for building highly customized conversational applications.


# Multi-Agents

Learn how to use Multi-Agents in Flowise, written by @toi500

This guide provides an introduction to the multi-agent AI system architecture within Flowise, detailing its components, operational constraints, and workflow.

## Concept

Analogous to a team of domain experts collaborating on a complex project, a multi-agent system uses the principle of specialization within artificial intelligence.

This multi-agent system utilizes a hierarchical, sequential workflow, maximizing efficiency and specialization.

### 1. System Architecture

We can define the multi-agent AI architecture as a scalable AI system capable of handling complex projects by breaking them down into manageable sub-tasks.

In Flowise, a multi-agent system comprises two primary nodes or agent types and a user, interacting in a hierarchical graph to process requests and deliver a targeted outcome:

1. **User:** The user acts as the **system's starting point**, providing the initial input or request. While a multi-agent system can be designed to handle a wide range of requests, it's important that these user requests align with the system's intended purpose. Any request falling outside this scope can lead to inaccurate results, unexpected loops, or even system errors. Therefore, user interactions, while flexible, should always align with the system's core functionalities for optimal performance.
2. **Supervisor AI:** The Supervisor acts as the **system's orchestrator**, overseeing the entire workflow. It analyzes user requests, decomposes them into a sequence of sub-tasks, assigns these sub-tasks to the specialized worker agents, aggregates the results, and ultimately presents the processed output back to the user.
3. **Worker AI Team:** This team consists of specialized AI agents, or Workers, each instructed - via prompt messages - to handle a specific task within the workflow. These Workers operate independently, receiving instructions and data from the Supervisor, **executing their specialized functions**, using tools as needed, and returning the results to the Supervisor.

<figure><img src="/files/4V7hAhCMaUn9fr8oEcRZ" alt=""><figcaption></figcaption></figure>

### 2. Operational Constraints

To maintain order and simplicity, this multi-agent system operates under two important constraints:

* **One task at a time:** The Supervisor is intentionally designed to focus on a single task at a time. It waits for the active Worker to complete its task and return the results before it analyzes the next step and delegates the subsequent task. This ensures each step is completed successfully before moving on, preventing overcomplexity.
* **One Supervisor per flow:** Flowise's multi-agent systems currently operate with a single Supervisor. It is theoretically possible to nest multi-agent systems into a more sophisticated hierarchical structure for highly complex workflows, with a top-level supervisor and mid-level supervisors managing teams of workers (what LangChain defines as "[Hierarchical Agent Teams](https://github.com/langchain-ai/langgraph/blob/main/examples/multi_agent/hierarchical_agent_teams.ipynb)"), but Flowise does not currently support this structure.

{% hint style="info" %}
These two constraints are important when **planning your application's workflow**. If you try to design a workflow where the Supervisor needs to delegate multiple tasks simultaneously, in parallel, the system won't be able to handle it and you'll encounter an error.
{% endhint %}

## The Supervisor

The Supervisor, as the agent governing the overall workflow and responsible for delegating tasks to the appropriate Worker, requires a set of components to function correctly:

* **Chat Model capable of function calling** to manage the complexities of task decomposition, delegation, and result aggregation.
* **Agent Memory (optional)**: While the Supervisor can function without Agent Memory, this node can significantly enhance workflows that require access to past Supervisor states. This **state preservation** could allow the Supervisor to resume the job from a specific point or leverage past data for improved decision-making.

<figure><img src="/files/uLxKNTswqRH3pVvSEX5p" alt=""><figcaption></figcaption></figure>

### Supervisor Prompt

By default, the Supervisor Prompt is worded in a way that instructs the Supervisor to analyze user requests, decompose them into a sequence of sub-tasks, and assign these sub-tasks to the specialized worker agents.

While the Supervisor Prompt is customizable to fit specific application needs, it always requires the following two key elements:

* **The {team\_members} Variable:** This variable is crucial for the Supervisor's understanding of the available workforce, since it provides the Supervisor with a list of Worker names. This allows the Supervisor to delegate tasks to the most appropriate Worker based on their expertise.
* **The "FINISH" Keyword:** This keyword serves as a signal within the Supervisor Prompt. It indicates when the Supervisor should consider the task complete and present the final output to the user. Without a clear "FINISH" directive, the Supervisor might continue delegating tasks unnecessarily or fail to deliver a coherent and finalized result to the user. It signals that all necessary sub-tasks have been executed and the user's request has been fulfilled.

<figure><img src="/files/7hQeDNcfHIHpDl2QSL7P" alt="" width="375"><figcaption></figcaption></figure>
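To make these two elements concrete, here is an illustrative supervisor prompt in this spirit (a paraphrase for explanation, not Flowise's exact default wording):

```
You are a supervisor tasked with managing a conversation between the
following workers: {team_members}. Given the user request, decide which
worker should act next. Each worker will perform a task and respond with
its results. When the request has been fulfilled, respond with FINISH.
```

Note how `{team_members}` supplies the Worker names and "FINISH" gives the Supervisor an explicit stopping signal.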

{% hint style="info" %}
It's important to understand that the Supervisor plays a very distinct role from Workers. Unlike Workers, which can be tailored with highly specific instructions, the **Supervisor operates most effectively with general directives, which allow it to plan and delegate tasks as it deems appropriate.** If you're new to multi-agent systems, we recommend sticking with the default Supervisor prompt.
{% endhint %}

### Understanding Recursion Limit in Supervisor node:

This parameter restricts the maximum depth of nested function calls within our application. In our current context, **it limits how many times the Supervisor can trigger itself within a single workflow execution**. This is important for preventing unbounded recursion and ensuring resources are used efficiently.

<figure><img src="/files/vtzRzuiyuUhuVzSbvjB9" alt="" width="375"><figcaption></figcaption></figure>

### How the Supervisor works

Upon receiving a user query, the Supervisor initiates the workflow by analyzing the request and discerning the user's intended outcome.

Then, leveraging the `{team_members}` variable in the Supervisor Prompt, which only provides a list of available Worker AI names, the Supervisor infers each Worker's specialty and strategically selects the most suitable Worker for each task within the workflow.

{% hint style="info" %}
Since the Supervisor only has the Workers' names to infer their functionality inside the workflow, it is very important that those names are set accordingly. **Clear, concise, and descriptive names that accurately reflect the Worker's role or area of expertise are crucial for the Supervisor to make informed decisions when delegating tasks.** This ensures that the right Worker is selected for the right job, maximizing the system's accuracy in fulfilling the user's request.
{% endhint %}

***

## **The Worker**

The Worker, as a specialized agent instructed to handle a specific task within the system, requires two essential components to function correctly:

* **A Supervisor:** Each Worker must be connected to the Supervisor so it can be called upon when a task needs to be delegated. This connection establishes the essential hierarchical relationship within the multi-agent system, ensuring that the Supervisor can efficiently distribute work to the appropriate specialized Workers.
* **A Chat Model node capable of function calling**: By default, Workers inherit the Supervisor's Chat Model node unless assigned one directly. This function-calling capability enables the Worker to interact with tools designed for its specialized task.

<figure><img src="/files/n0mNEG14afrDuDvpPJWt" alt="" width="375"><figcaption></figcaption></figure>

{% hint style="info" %}
The ability to assign **different Chat Models to each Worker** provides significant flexibility and optimization opportunities for our application. By selecting [Chat Models](/integrations/langchain/chat-models) tailored to specific tasks, we can leverage more cost-effective solutions for simpler tasks and reserve specialized, potentially more expensive, models when truly necessary.
{% endhint %}

### Understanding Max Iteration parameter in Workers

[LangChain](https://python.langchain.com/v0.1/docs/modules/agents/how_to/max_iterations/) describes capping max iterations as an important control mechanism for preventing runaway behavior within an agentic system. In this context, it serves as a guardrail against excessive, potentially infinite, interactions between the Supervisor and a Worker.

Unlike the Supervisor node's `Recursion Limit`, which restricts how many times the Supervisor can call itself, the Worker node's `Max Iteration` parameter limits how many times the Supervisor can iterate on or query a specific Worker.

By capping Max Iteration, we ensure that costs remain under control, even in cases of unexpected system behavior.

***

## Example: A practical use case

Now that we've established a foundational understanding of how Multi-Agent systems work within Flowise, let's explore a practical application.

Imagine a **Lead Outreach multi-agent system** (available in the Marketplace) designed to automate the process of identifying, qualifying, and engaging with potential leads. This system would utilize a Supervisor to orchestrate the following two Workers:

* **Lead Researcher:** This Worker, using the Google Search Tool, will be responsible for gathering potential leads based on user-defined criteria.
* **Lead Sales Generator:** This Worker will utilize the information gathered by the Lead Researcher to create personalized email drafts for the sales team.

<figure><img src="/files/ddc8AVQ5k0DCTZI3yFFB" alt=""><figcaption></figcaption></figure>

**Background:** A user working at Solterra Renewables wants to gather available information about Evergreen Energy Group, a reputable renewable energy company located in the UK, and target its CEO, Amelia Croft, as a potential lead.

**User Request:** The Solterra Renewables employee provides the following query to the multi-agent system: "*I need information about Evergreen Energy Group and Amelia Croft as a potential new customer for our business.*"

1. **Supervisor:**
   * The Supervisor receives the user request and delegates the "Lead Research" task to the `Lead Researcher Worker`.
2. **Lead Researcher Worker:**
   * The Lead Researcher Worker, using the Google Search Tool, gathers information about Evergreen Energy Group, focusing on:
     * Company background, industry, size, and location.
     * Recent news and developments.
     * Key executives, including confirming Amelia Croft's role as CEO.
   * The Lead Researcher sends the gathered information back to the `Supervisor`.
3. **Supervisor:**
   * The Supervisor receives the research data from the Lead Researcher Worker and confirms that Amelia Croft is a relevant lead.
   * The Supervisor delegates the "Generate Sales Email" task to the `Lead Sales Generator Worker`, providing:
     * The research information on Evergreen Energy Group.
     * Amelia Croft's email.
     * Context about Solterra Renewables.
4. **Lead Sales Generator Worker:**
   * The Lead Sales Generator Worker crafts a personalized email draft tailored to Amelia Croft, taking into account:
     * Her role as CEO and the relevance of Solterra Renewables' services to her company.
     * Information from the research about Evergreen Energy Group's current focus or projects.
   * The Lead Sales Generator Worker sends the completed email draft back to the `Supervisor`.
5. **Supervisor:**
   * The Supervisor receives the generated email draft and issues the "FINISH" directive.
   * The Supervisor outputs the email draft back to the user, the `Solterra Renewables employee`.
6. **User Receives Output:** The Solterra Renewables employee receives a personalized email draft ready to be reviewed and sent to Amelia Croft.

## Video Tutorials

Here, you'll find a list of video tutorials from [Leon's YouTube channel](https://www.youtube.com/@leonvanzyl) showing how to build multi-agent applications in Flowise using no-code.

{% embed url="https://www.youtube.com/watch?ab_channel=LeonvanZyl&v=284Z8k7yJRE" %}

{% embed url="https://www.youtube.com/watch?ab_channel=LeonvanZyl&v=MaqcO15y-Vs" %}

{% embed url="https://www.youtube.com/watch?ab_channel=LeonvanZyl&v=eAH7LDGMVEs" %}


# Sequential Agents

Learn the Fundamentals of Sequential Agents in Flowise, written by @toi500

This guide offers a complete overview of the Sequential Agent AI system architecture within Flowise, exploring its core components and workflow design principles.

{% hint style="warning" %}
**Disclaimer**: This documentation is intended to help Flowise users understand and build conversational workflows using the Sequential Agent system architecture. It is not intended to be a comprehensive technical reference for the LangGraph framework and should not be interpreted as defining industry standards or core LangGraph concepts.
{% endhint %}

## Concept

Built on top of [LangGraph](https://www.langchain.com/langgraph), Flowise's Sequential Agents architecture facilitates the **development of conversational agentic systems by structuring the workflow as a directed cyclic graph (DCG)**, allowing controlled loops and iterative processes.

This graph, composed of interconnected nodes, defines the sequential flow of information and actions, enabling the agents to process inputs, execute tasks, and generate responses in a structured manner.

<figure><img src="/files/CZSd6wZfzsASWZwHneow" alt=""><figcaption></figcaption></figure>

### Understanding Sequential Agents' DCG Architecture

This architecture simplifies the management of complex conversational workflows by defining a clear and understandable sequence of operations through its DCG structure.

Let's explore some key elements of this approach:

{% tabs %}
{% tab title="Core Principles" %}

* **Node-based processing:** Each node in the graph represents a discrete processing unit, encapsulating its own functionality like language processing, tool execution, or conditional logic.
* **Data flow as connections:** Edges in the graph represent the flow of data between nodes, where the output of one node becomes the input for the subsequent node, enabling a chain of processing steps.
* **State management:** State is managed as a shared object, persisting throughout the conversation. This allows nodes to access relevant information as the workflow progresses.
  {% endtab %}

{% tab title="Terminology" %}

* **Flow:** The movement or direction of data within the workflow. It describes how information passes between nodes during a conversation.
* **Workflow:** The overall design and structure of the system. It's the blueprint that defines the sequence of nodes, their connections, and the logic that orchestrates the conversation flow.
* **State:** A shared data structure that represents the current snapshot of the conversation. It includes the conversation history `state.messages` and any custom State variables defined by the user.
* **Custom State:** User-defined key-value pairs added to the state object to store additional information relevant to the workflow.
* **Tool:** An external system, API, or service that can be accessed and executed by the workflow to perform specific tasks, such as retrieving information, processing data, or interacting with other applications.
* **Human-in-the-Loop (HITL):** A feature that allows human intervention in the workflow, primarily during tool execution. It enables a human reviewer to approve or reject a tool call before it's executed.
* **Parallel node execution:** The ability to execute multiple nodes concurrently within a workflow by using a branching mechanism. This means that different branches of the workflow can process information or interact with tools simultaneously, even though the overall flow of execution remains sequential.
  {% endtab %}
  {% endtabs %}
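
To make the State terminology above concrete, here is a minimal sketch of what the shared state object might look like at runtime. All key names other than `messages` are illustrative Custom State, not built-in Flowise fields:

```javascript
// Hypothetical snapshot of the shared State mid-conversation.
// `messages` is the built-in conversation history; `userName` and
// `orderStatus` are examples of user-defined Custom State keys.
const state = {
    messages: [
        { role: "user", content: "I'd like to check my order." },
        { role: "assistant", content: "Sure, what is your order number?" }
    ],
    userName: "Jane",
    orderStatus: "Pending"
};

// Any node in the workflow can read the history and the custom keys.
console.log(state.messages.length); // 2
console.log(state.orderStatus);     // "Pending"
```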

***

## Sequential Agents vs Multi-Agents

While both Multi-Agent and Sequential Agent systems in Flowise are built upon the LangGraph framework and share the same fundamental principles, the Sequential Agent architecture provides a [lower level of abstraction](#user-content-fn-1)[^1], offering more granular control over every step of the workflow.

**Multi-Agent systems**, which are characterized by a hierarchical structure with a central supervisor agent delegating tasks to specialized worker agents, **excel at handling complex workflows by breaking them down into manageable sub-tasks**. This decomposition into sub-tasks is made possible by pre-configuring core system elements under the hood, such as condition nodes, which would require manual setup in a Sequential Agent system. As a result, users can more easily build and manage teams of agents.

In contrast, **Sequential Agent systems** operate like a streamlined assembly line, where data flows sequentially through a chain of nodes, making them ideal for tasks demanding a precise order of operations and incremental data refinement. Their lower-level access to the underlying workflow structure makes them fundamentally more **flexible and customizable than Multi-Agent systems, offering parallel node execution and full control over the system logic**. By incorporating Condition, State, and Loop nodes into the workflow, they enable new dynamic branching capabilities.

### Introducing State, Loop and Condition Nodes

Flowise's Sequential Agents offer new capabilities for creating conversational systems that can adapt to user input, make decisions based on context, and perform iterative tasks.

These capabilities are made possible by the introduction of four new core nodes: the State Node, the Loop Node, and two Condition Nodes (the Condition Node and the Condition Agent Node).

<figure><img src="/files/rRs5gaPFzvNdAweJeX8p" alt=""><figcaption></figcaption></figure>

* **State Node:** We define State as a shared data structure that represents the current snapshot of our application or workflow. The State Node allows us to **add a custom State** to our workflow from the start of the conversation. This custom State is accessible and modifiable by other nodes in the workflow, enabling dynamic behavior and data sharing.
* **Loop Node:** This node **introduces controlled cycles** within the Sequential Agent workflow, enabling iterative processes where a sequence of nodes can be repeated based on specific conditions. This allows agents to refine outputs, gather additional information from the user, or perform tasks multiple times.
* **Condition Nodes:** The Condition and Condition Agent Node provide the necessary control to **create complex conversational flows with branching paths**. The Condition Node evaluates conditions directly, while the Condition Agent Node uses an agent's reasoning to determine the branching logic. This allows us to dynamically guide the flow's behavior based on user input, the custom State, or results of actions taken by other nodes.
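
The control flow these nodes add can be pictured as ordinary branching and looping. The following is a conceptual sketch in plain JavaScript, not Flowise's actual implementation; the condition, the refinement step, and the iteration cap are all illustrative:

```javascript
// Conceptual model of a Condition Node plus a Loop Node:
// keep refining a draft until it passes a check or we hit a cap.
function conditionNode(state) {
    // A Condition Node evaluates the current State and picks a branch.
    return state.draft.length >= 20 ? "end" : "loop";
}

function refineDraft(state) {
    // Stand-in for an Agent/LLM node that improves the output.
    return { ...state, draft: state.draft + " (refined)", iterations: state.iterations + 1 };
}

function runWorkflow(initialState, maxIterations = 5) {
    let state = initialState;
    // The Loop Node routes the flow back to a previous node
    // until the Condition Node routes to "end".
    while (conditionNode(state) === "loop" && state.iterations < maxIterations) {
        state = refineDraft(state);
    }
    return state;
}

const result = runWorkflow({ draft: "Hi", iterations: 0 });
console.log(result.iterations); // 2
```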

### Choosing the right system

Selecting the ideal system for your application depends on understanding your specific workflow needs. Factors like task complexity, the need for parallel processing, and your desired level of control over data flow are all key considerations.

* **For simplicity:** If your workflow is relatively straightforward, with tasks completed one after the other and no need for parallel node execution or Human-in-the-Loop (HITL), the Multi-Agent approach offers ease of use and quick setup.
* **For flexibility:** If your workflow needs parallel execution, dynamic conversations, custom State management, and the ability to incorporate HITL, the **Sequential Agent** approach provides the necessary flexibility and control.

Here's a table comparing Multi-Agent and Sequential Agent implementations in Flowise, highlighting key differences and design considerations:

<table><thead><tr><th width="173.33333333333331"></th><th width="281">Multi-Agent</th><th>Sequential Agent</th></tr></thead><tbody><tr><td>Structure</td><td><strong>Hierarchical</strong>; Supervisor delegates to specialized Workers.</td><td><strong>Linear, cyclic and/or</strong> <strong>branching</strong>; nodes connect in a sequence, with conditional logic for branching.</td></tr><tr><td>Workflow</td><td>Flexible; designed for breaking down a complex task into a <strong>sequence of sub-tasks</strong>, completed one after another.</td><td>Highly flexible; <strong>supports parallel node execution</strong>, complex dialogue flows, branching logic, and loops within a single conversation turn.</td></tr><tr><td>Parallel Node Execution</td><td><strong>No</strong>; Supervisor handles one task at a time.</td><td><strong>Yes</strong>; can trigger multiple actions in parallel within a single run.</td></tr><tr><td>State Management</td><td><strong>Implicit</strong>; State is in place, but is not explicitly managed by the developer.</td><td><strong>Explicit</strong>; State is in place, and developers can define and manage an initial or custom State using the State Node and the "Update State" field in various nodes.</td></tr><tr><td>Tool Usage</td><td><strong>Workers</strong> can access and use tools as needed.</td><td>Tools are accessed and executed through <strong>Agent Nodes</strong> and <strong>Tool Nodes</strong>.</td></tr><tr><td>Human-in-the-Loop (HITL)</td><td>HITL is <strong>not supported.</strong></td><td><strong>Supported</strong> through the Agent Node and Tool Node's "Require Approval" feature, allowing human review and approval or rejection of tool execution.</td></tr><tr><td>Complexity</td><td>Higher level of abstraction; <strong>simplifies workflow design.</strong></td><td>Lower level of abstraction; <strong>more complex workflow design</strong>, requiring careful planning of node interactions, custom State management, and conditional 
logic.</td></tr><tr><td>Ideal Use Cases</td><td><ul><li>Automating linear processes (e.g., data extraction, lead generation).</li><li>Situations where sub-tasks need to be completed one after the other.</li></ul></td><td><ul><li>Building conversational systems with dynamic flows.</li><li>Complex workflows requiring parallel node execution or branching logic.</li><li>Situations where decision-making is needed at multiple points in the conversation.</li></ul></td></tr></tbody></table>

{% hint style="info" %}
**Note**: Even though Multi-Agent systems are technically a higher-level layer built upon the Sequential Agent architecture, they offer a distinct user experience and approach to workflow design. The comparison above treats them as separate systems to help you select the best option for your specific needs.
{% endhint %}

***

## Sequential Agents Nodes

Sequential Agents bring a whole new dimension to Flowise, **introducing 10 specialized nodes**, each serving a specific purpose and offering more control over how our conversational agents interact with users, process information, make decisions, and execute actions.

The following sections aim to provide a comprehensive understanding of each node's functionality, inputs, outputs, and best practices, ultimately enabling you to craft sophisticated conversational workflows for a variety of applications.

<figure><img src="/files/WjK4AIXnTsGksbGPkSSE" alt=""><figcaption></figcaption></figure>

***

## 1. Start Node

As its name implies, the Start Node is the **entry point for all workflows in the Sequential Agent architecture**. It receives the initial user query, initializes the conversation State, and sets the flow in motion.

<figure><img src="/files/gVHjFrnRTafDKGULv6QI" alt="" width="300"><figcaption></figcaption></figure>

### Understanding the Start Node

The Start Node ensures that our conversational workflows have the necessary setup and context to function correctly. **It's responsible for setting up key functionalities** that will be used throughout the rest of the workflow:

* **Defining the default LLM:** The Start Node requires us to specify a Chat Model (LLM) compatible with function calling, enabling agents in the workflow to interact with tools and external systems. It will be the default LLM used under the hood in the workflow.
* **Initializing Memory:** We can optionally connect an Agent Memory Node to store and retrieve conversation history, enabling more context-aware responses.
* **Setting a custom State:** By default, the State contains an immutable `state.messages` array, which acts as the transcript or history of the conversation between the user and the agents. The Start Node allows you to connect a custom State to the workflow by adding a State Node, enabling the storage of additional information relevant to your workflow.
* **Enabling moderation:** Optionally, we can connect Input Moderation to analyze the user's input and prevent potentially harmful content from being sent to the LLM.

### Inputs

<table><thead><tr><th width="212"></th><th width="102">Required</th><th>Description</th></tr></thead><tbody><tr><td>Chat Model</td><td><strong>Yes</strong></td><td>The default LLM that will power the conversation. Only compatible with <strong>models that are capable of function calling</strong>.</td></tr><tr><td>Agent Memory Node</td><td>No</td><td>Connect an Agent Memory Node to <strong>enable persistence and context preservation</strong>.</td></tr><tr><td>State Node</td><td>No</td><td>Connect a State Node to <strong>set a custom State</strong>, a shared context that can be accessed and modified by other nodes in the workflow.</td></tr><tr><td>Input Moderation</td><td>No</td><td>Connect a Moderation Node to <strong>filter content</strong> by detecting text that could generate harmful output, preventing it from being sent to the LLM.</td></tr></tbody></table>

### Outputs

The Start Node can connect to the following nodes as outputs:

* **Agent Node:** Routes the conversation flow to an Agent Node, which can then execute actions or access tools based on the conversation's context.
* **LLM Node:** Routes the conversation flow to an LLM Node for processing and response generation.
* **Condition Agent Node:** Connects to a Condition Agent Node to implement branching logic based on the agent's evaluation of the conversation.
* **Condition Node:** Connects to a Condition Node to implement branching logic based on predefined conditions.

### Best Practices

{% tabs %}
{% tab title="Pro Tips" %}
**Choose the right Chat Model**

Ensure your selected LLM supports function calling, a key feature for enabling agent-tool interactions. Additionally, choose an LLM that aligns with the complexity and requirements of your application. You can override the default LLM by setting it at the Agent/LLM/Condition Agent node level when necessary.

**Consider context and persistence**

If your use case demands it, utilize Agent Memory Node to maintain context and personalize interactions.
{% endtab %}

{% tab title="Potential Pitfalls" %}
**Incorrect Chat Model (LLM) selection**

* **Problem:** The Chat Model selected in the Start Node is not suitable for the intended tasks or capabilities of the workflow, resulting in poor performance or inaccurate responses.
* **Example:** A workflow requires a Chat Model with strong summarization capabilities, but the Start Node selects a model optimized for code generation, leading to inadequate summaries.
* **Solution:** Choose a Chat Model that aligns with the specific requirements of your workflow. Consider the model's strengths, weaknesses, and the types of tasks it excels at. Refer to the documentation and experiment with different models to find the best fit.

**Overlooking Agent Memory Node configuration**

* **Problem:** The Agent Memory Node is not properly connected or configured, resulting in the loss of conversation history data between sessions.
* **Example:** You intend to use persistent memory to store user preferences, but the Agent Memory Node is not connected to the Start Node, causing preferences to be reset on each new conversation.
* **Solution:** Ensure that the Agent Memory Node is connected to the Start Node and configured with the appropriate database (SQLite). For most use cases, the default SQLite database will be sufficient.

**Inadequate Input Moderation**

* **Problem:** The "Input Moderation" is not enabled or configured correctly, allowing potentially harmful or inappropriate user input to reach the LLM and generate undesirable responses.
* **Example:** A user submits offensive language, but the input moderation fails to detect it or is not set up at all, allowing the query to reach the LLM.
* **Solution:** Add and configure an input moderation node in the Start Node to filter out potentially harmful or inappropriate language. Customize the moderation settings to align with your specific requirements and use cases.
  {% endtab %}
  {% endtabs %}

## 2. Agent Memory Node

The Agent Memory Node **provides a mechanism for persistent memory storage**, allowing the Sequential Agent workflow to retain the conversation history `state.messages` and any custom State previously defined across multiple interactions.

This long-term memory is essential for agents to learn from previous interactions, maintain context over extended conversations, and provide more relevant responses.

<figure><img src="/files/GqSUUksak9lvtjR9IQSo" alt="" width="299"><figcaption></figcaption></figure>

### Where the data is recorded

By default, Flowise utilizes its **built-in SQLite database** to store conversation history and custom state data, creating a "**checkpoints**" table to manage this persistent information.

#### Understanding the "checkpoints" table structure and data format

This table **stores snapshots of the system's State at various points during a conversation**, enabling the persistence and retrieval of conversation history. Each row represents a specific point or "checkpoint" in the workflow's execution.

<figure><img src="/files/FIPt6Szc2w2bZhqY6g0T" alt=""><figcaption></figcaption></figure>

#### Table structure

* **thread\_id:** A unique identifier representing a specific conversation session, **our session ID**. It groups together all checkpoints related to a single workflow execution.
* **checkpoint\_id:** A unique identifier for each execution step (node execution) within the workflow. It helps track the order of operations and identify the State at each step.
* **parent\_id:** Indicates the checkpoint\_id of the preceding execution step that led to the current checkpoint. This establishes a hierarchical relationship between checkpoints, allowing for the reconstruction of the workflow's execution flow.
* **checkpoint:** Contains a JSON string representing the current State of the workflow at that specific checkpoint. This includes the values of variables, the messages exchanged, and any other relevant data captured at that point in the execution.
* **metadata:** Provides additional context about the checkpoint, specifically related to node operations.

#### How it works

As a Sequential Agent workflow executes, the system records a checkpoint in this table for each significant step. This mechanism provides several benefits:

* **Execution tracking:** Checkpoints enable the system to understand the execution path and the order of operations within the workflow.
* **State management:** Checkpoints store the State of the workflow at each step, including variable values, conversation history, and any other relevant data. This allows the system to maintain contextual awareness and make informed decisions based on the current State.
* **Workflow resumption:** If the workflow is paused or interrupted (e.g., due to a system error or user request), the system can use the stored checkpoints to resume execution from the last recorded State. This ensures that the conversation or task continues from where it left off, preserving the user's progress and preventing data loss.
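
The checkpointing mechanism described above can be modeled in a few lines of JavaScript. This is an in-memory simulation of the "checkpoints" table, not Flowise's actual storage code; the column names follow the table structure described earlier, and the session IDs and messages are illustrative:

```javascript
// In-memory stand-in for the "checkpoints" table.
const checkpoints = [];

function recordCheckpoint(threadId, checkpointId, parentId, state) {
    checkpoints.push({
        thread_id: threadId,              // one conversation session
        checkpoint_id: checkpointId,      // one execution step
        parent_id: parentId,              // preceding step, or null for the first
        checkpoint: JSON.stringify(state) // snapshot of the State at this step
    });
}

// Simulate three execution steps within one session.
recordCheckpoint("session-1", "cp-1", null,   { messages: ["hello"] });
recordCheckpoint("session-1", "cp-2", "cp-1", { messages: ["hello", "hi there"] });
recordCheckpoint("session-1", "cp-3", "cp-2", { messages: ["hello", "hi there", "bye"] });

// Workflow resumption: find the latest checkpoint for the session
// and restore the State recorded there.
function resume(threadId) {
    const rows = checkpoints.filter(r => r.thread_id === threadId);
    const latest = rows[rows.length - 1];
    return JSON.parse(latest.checkpoint);
}

console.log(resume("session-1").messages.length); // 3
```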

### **Inputs**

The Agent Memory Node has **no specific input connections**.

### Node Setup

<table><thead><tr><th width="189"></th><th width="107">Required</th><th>Description</th></tr></thead><tbody><tr><td>Database</td><td><strong>Yes</strong></td><td>The type of database used for storing conversation history. Currently, <strong>only SQLite is supported</strong>.</td></tr></tbody></table>

### Additional Parameters

<table><thead><tr><th width="189"></th><th width="107">Required</th><th>Description</th></tr></thead><tbody><tr><td>Database File Path</td><td>No</td><td>The file path to the SQLite database file. <strong>If not provided, the system will use a default location</strong>.</td></tr></tbody></table>

### **Outputs**

The Agent Memory Node interacts solely with the **Start Node**, making the conversation history available from the very beginning of the workflow.

### **Best Practices**

{% tabs %}
{% tab title="Pro Tips" %}
**Strategic use**

Employ Agent Memory only when necessary. For simple, stateless interactions, it might be overkill. Reserve it for scenarios where retaining information across turns or sessions is essential.
{% endtab %}

{% tab title="Potential Pitfalls" %}
**Unnecessary overhead**

* **The Problem:** Using Agent Memory for every interaction, even when not needed, introduces unnecessary storage and processing overhead. This can slow down response times and increase resource consumption.
* **Example:** A simple weather chatbot that provides information based on a single user request doesn't need to store conversation history.
* **Solution:** Analyze the requirements of your system and only utilize Agent Memory when persistent data storage is essential for functionality or user experience.
  {% endtab %}
  {% endtabs %}

***

## 3. State Node

The State Node, which can only be connected to the Start Node, **provides a mechanism to set a user-defined or custom State** into our workflow from the start of the conversation. This custom State is a JSON object that is shared and can be updated by nodes in the graph, passing from one node to another as the flow progresses.

<figure><img src="/files/8ehxUQlshchTC25XRja9" alt="" width="299"><figcaption></figcaption></figure>

### Understanding the State Node

By default, the State includes a `state.messages` array, which acts as our conversation history. This array stores all messages exchanged between the user and the agents, or any other actors in the workflow, preserving it throughout the workflow execution.

Since this `state.messages` array is immutable by definition, **the purpose of the State Node is to allow us to define custom key-value pairs**, expanding the state object to hold any additional information relevant to our workflow.

{% hint style="info" %}
When no **Agent Memory Node** is used, the State operates in-memory and is not persisted for future use.
{% endhint %}

### Inputs

The State Node has **no specific input connections**.

### Outputs

The State Node can only connect to the **Start Node**, allowing the setup of a custom State from the beginning of the workflow and allowing other nodes to access and potentially modify this shared custom State.

### Additional Parameters

<table><thead><tr><th width="157"></th><th width="113">Required</th><th>Description</th></tr></thead><tbody><tr><td>Custom State</td><td><strong>Yes</strong></td><td>A JSON object representing the <strong>initial custom State of the workflow</strong>. This object can contain any key-value pairs relevant to the application.</td></tr></tbody></table>

### How to set a custom State <a href="#alert-dialog-title" id="alert-dialog-title"></a>

Specify the **key**, **operation type**, and **default value** for the state object. The operation type can be either "Replace" or "Append".

* **Replace**
  1. Replace the existing value with the new value.
  2. If the new value is null, the existing value will be retained.
* **Append**
  1. Append the new value to the existing value.
  2. Default values can be empty or an array. Ex: \["a", "b"]
  3. Final value is an array.

#### Example using JS

{% code overflow="wrap" %}

```javascript
{
    aggregate: {
        value: (x, y) => x.concat(y), // here we append the new message to the existing messages
        default: () => []
    }
}
```

{% endcode %}
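
The two operation types can also be sketched as plain functions. This is an illustration of the semantics described above (Replace retains the existing value when the new value is null; Append accumulates values into an array), not Flowise's internal code:

```javascript
// "Replace": the new value wins, unless it is null.
function replaceOp(existing, incoming) {
    return incoming === null ? existing : incoming;
}

// "Append": accumulate values; the final value is always an array.
function appendOp(existing, incoming) {
    return (existing ?? []).concat(incoming);
}

console.log(replaceOp("Pending", "Shipped")); // "Shipped"
console.log(replaceOp("Pending", null));      // "Pending" (retained)
console.log(appendOp(["a", "b"], "c"));       // ["a", "b", "c"]
```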

#### Example using Table

To define a custom State using the table interface in the State Node, follow these steps:

1. **Add item:** Click the "+ Add Item" button to add rows to the table. Each row represents a key-value pair in your custom State.
2. **Specify keys:** In the "Key" column, enter the name of each key you want to define in your state object. For example, you might have keys like "userName", "userLocation", etc.
3. **Choose operations:** In the "Operation" column, select the desired operation for each key. You have two options:
   * **Replace:** This will replace the existing value of the key with the new value provided by a node. If the new value is null, the existing value will be retained.
   * **Append:** This will append the new value to the existing value of the key. The final value will be an array.
4. **Set default values:** In the "Default Value" column, enter the initial value for each key. This value will be used if no other node provides a value for the key. The default value can be empty or an array.

#### Example Table

| Key      | Operation | Default Value |
| -------- | --------- | ------------- |
| userName | Replace   | null          |

<figure><img src="/files/WGTmcoTrydF5AVKCN7mN" alt="" width="375"><figcaption></figcaption></figure>

1. This table defines one key in the custom State: `userName`.
2. The `userName` key will use the "Replace" operation, meaning its value will be updated whenever a node provides a new value.
3. The `userName` key has a default value of *null*, indicating that it has no initial value.

{% hint style="info" %}
Remember that this table-based approach is an alternative to defining the custom State using JavaScript. Both methods achieve the same result.
{% endhint %}

#### Example using API

```json
{
    "question": "hello",
    "overrideConfig": {
        "stateMemory": [
            {
                "Key": "userName",
                "Operation": "Replace",
                "Default Value": "somevalue"
            }
        ]
    }
}
```
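
A sketch of building and sending that payload from JavaScript follows. The endpoint path matches Flowise's Prediction API, but the host and the `CHATFLOW_ID` placeholder are assumptions you would replace with your own deployment's values:

```javascript
// Build the request body that overrides the State Node's defaults.
function buildPredictionPayload(question, stateMemory) {
    return {
        question,
        overrideConfig: { stateMemory }
    };
}

const payload = buildPredictionPayload("hello", [
    { "Key": "userName", "Operation": "Replace", "Default Value": "somevalue" }
]);

// Hypothetical call; adjust host, chatflow ID, and auth to your deployment.
// fetch("http://localhost:3000/api/v1/prediction/CHATFLOW_ID", {
//     method: "POST",
//     headers: { "Content-Type": "application/json" },
//     body: JSON.stringify(payload)
// }).then(res => res.json()).then(console.log);
```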

### Best Practices

{% tabs %}
{% tab title="Pro-Tips" %}
**Plan your custom State structure**

Before building your workflow, design the structure of your custom State. A well-organized custom State will make your workflow easier to understand, manage, and debug.

**Use meaningful key names**

Choose descriptive and consistent key names that clearly indicate the purpose of the data they hold. This will improve the readability of your code and make it easier for others (or you in the future) to understand how the custom State is being used.

**Keep custom State minimal**

Only store information in the custom State that is essential for the workflow's logic and decision-making.

**Consider State persistence**

If you need to preserve State across multiple conversation sessions (e.g., for user preferences, order history, etc.), use the Agent Memory Node to store the State in a persistent database.
{% endtab %}

{% tab title="Potential Pitfalls" %}
**Inconsistent State Updates**

* **Problem:** Updating the custom State in multiple nodes without a clear strategy can lead to inconsistencies and unexpected behavior.
* **Example**
  1. Agent 1 updates `orderStatus` to "Payment Confirmed".
  2. Agent 2, in a different branch, updates `orderStatus` to "Order Complete" without checking the previous status.
* **Solution:** Use Condition Nodes to control the flow of custom State updates and ensure that custom State transitions happen in a logical and consistent manner.
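
The guarded-update strategy can be sketched as an ordinary state-machine check; the status names and the transition map below are illustrative:

```javascript
// Allowed transitions for the illustrative `orderStatus` key.
const allowedTransitions = {
    "Pending": ["Payment Confirmed"],
    "Payment Confirmed": ["Order Complete"],
    "Order Complete": []
};

// Only apply an update when it is a legal next step; otherwise keep
// the current value (mirroring a Condition Node gating the update).
function updateOrderStatus(state, next) {
    const legal = allowedTransitions[state.orderStatus] ?? [];
    return legal.includes(next) ? { ...state, orderStatus: next } : state;
}

let state = { orderStatus: "Pending" };
state = updateOrderStatus(state, "Order Complete");    // rejected: skips a step
console.log(state.orderStatus); // "Pending"
state = updateOrderStatus(state, "Payment Confirmed"); // accepted
console.log(state.orderStatus); // "Payment Confirmed"
```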
  {% endtab %}
  {% endtabs %}

***

## 4. Agent Node

The Agent Node is a **core component of the Sequential Agent architecture.** It acts as a decision-maker and orchestrator within our workflow.

<figure><img src="/files/KT5kc26MfeMua5yQydgx" alt="" width="268"><figcaption></figcaption></figure>

### Understanding the Agent Node

Upon receiving input from preceding nodes, which always includes the full conversation history `state.messages` and any custom State at that point in the execution, the Agent Node uses its defined "persona", established by the System Prompt, to determine if external tools are necessary to fulfill the user's request.

* If tools are required, the Agent Node autonomously selects and executes the appropriate tool. This execution can be automatic or, for sensitive tasks, require human approval (HITL) before proceeding. Once the tool completes its operation, the Agent Node receives the results, processes them using the designated Chat Model (LLM), and generates a comprehensive response.
* In cases where no tools are needed, the Agent Node directly leverages the Chat Model (LLM) to formulate a response based on the current conversation context.
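
The decision flow described above can be summarized in a short sketch. The helper functions stand in for the Chat Model's tool selection and the tool execution; they are purely illustrative, not Flowise's implementation:

```javascript
// Simplified model of one Agent Node turn.
function agentNodeTurn(state, tools, requireApproval, approve) {
    const toolCall = decideToolUse(state, tools); // LLM picks a tool, or null
    if (!toolCall) {
        // No tool needed: answer directly from the conversation context.
        return { ...state, reply: "direct LLM response" };
    }
    if (requireApproval && !approve(toolCall)) {
        // HITL: the human reviewer rejected the tool call.
        return { ...state, reply: "tool call rejected by reviewer" };
    }
    const toolResult = toolCall.run(); // execute the tool
    // The LLM turns the tool output into the final response.
    return { ...state, reply: `response using: ${toolResult}` };
}

// Illustrative stand-ins for the LLM's decision and an external tool.
function decideToolUse(state, tools) {
    return state.question.includes("order") ? tools[0] : null;
}
const tools = [{ name: "orderTracker", run: () => "order status retrieved" }];

const out = agentNodeTurn({ question: "where is my order?" }, tools, true, () => true);
console.log(out.reply); // "response using: order status retrieved"
```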

### Inputs

<table><thead><tr><th width="195"></th><th width="107">Required</th><th>Description</th></tr></thead><tbody><tr><td>External Tools</td><td>No</td><td>Provides the Agent Node with <strong>access to a suite of external tools</strong>, enabling it to perform actions and retrieve information.</td></tr><tr><td>Chat Model</td><td>No</td><td>Add a new Chat Model to <strong>overwrite the default Chat Model</strong> (LLM) of the workflow. Only compatible with models that are capable of function calling.</td></tr><tr><td>Start Node</td><td><strong>Yes</strong></td><td>Receives the <strong>initial user input</strong>, along with the custom State (if set up) and the rest of the default <code>state.messages</code> array from the Start Node.</td></tr><tr><td>Condition Node</td><td><strong>Yes</strong></td><td>Receives input from a preceding Condition Node, enabling the Agent Node to <strong>take actions or guide the conversation based on the outcome of the Condition Node's evaluation</strong>.</td></tr><tr><td>Condition Agent Node</td><td><strong>Yes</strong></td><td>Receives input from a preceding Condition Agent Node, enabling the Agent Node to <strong>take actions or guide the conversation based on the outcome of the Condition Agent Node's evaluation</strong>.</td></tr><tr><td>Agent Node</td><td><strong>Yes</strong></td><td>Receives input from a preceding Agent Node, <strong>enabling chained agent actions</strong> and maintaining conversational context</td></tr><tr><td>LLM Node</td><td><strong>Yes</strong></td><td>Receives the output from LLM Node, enabling the Agent Node to <strong>process the LLM's response</strong>.</td></tr><tr><td>Tool Node</td><td><strong>Yes</strong></td><td>Receives the output from a Tool Node, enabling the Agent Node to <strong>process and integrate tool's outputs into its response</strong>.</td></tr></tbody></table>

{% hint style="info" %}
The **Agent Node requires at least one connection from the following nodes**: Start Node, Agent Node, Condition Node, Condition Agent Node, LLM Node, or Tool Node.
{% endhint %}

### Outputs

The Agent Node can connect to the following nodes as outputs:

* **Agent Node:** Passes control to a subsequent Agent Node, enabling the chaining of multiple agent actions within a workflow. This allows for more complex conversational flows and task orchestration.
* **LLM Node:** Passes the agent's output to an LLM Node, enabling further language processing, response generation, or decision-making based on the agent's actions and insights.
* **Condition Agent Node:** Directs the flow to a Condition Agent Node. This node evaluates the Agent Node's output and its predefined conditions to determine the appropriate next step in the workflow.
* **Condition Node:** Similar to the Condition Agent Node, the Condition Node uses predefined conditions to assess the Agent Node's output, directing the flow along different branches based on the outcome.
* **End Node:** Concludes the conversation flow.
* **Loop Node:** Redirects the flow back to a previous node, enabling iterative or cyclical processes within the workflow. This is useful for tasks that require multiple steps or involve refining results based on previous interactions. For example, you might loop back to an earlier Agent Node or LLM Node to gather additional information or refine the conversation flow based on the current Agent Node's output.

### Node Setup

<table><thead><tr><th width="201"></th><th width="101">Required</th><th>Description</th></tr></thead><tbody><tr><td>Agent Name</td><td><strong>Yes</strong></td><td>Add a descriptive name to the Agent Node to enhance workflow readability and easily <strong>target it back when using loops</strong> within the workflow.</td></tr><tr><td>System Prompt</td><td>No</td><td>Defines the <strong>agent's 'persona'</strong> and <strong>guides its behavior</strong>. For example, "<em>You are a customer service agent specializing in technical support</em> [...]."</td></tr><tr><td>Require Approval</td><td>No</td><td><strong>Activates the Human-in-the-loop (HITL) feature</strong>. If set to '<strong>True</strong>,' the Agent Node will request human approval before executing any tool. This is particularly valuable for sensitive operations or when human oversight is desired. Defaults to '<strong>False</strong>,' allowing the Agent Node to execute tools autonomously.</td></tr></tbody></table>

### Additional Parameters

<table><thead><tr><th width="200"></th><th width="102">Required</th><th>Description</th></tr></thead><tbody><tr><td>Human Prompt</td><td>No</td><td>This prompt is appended to the <code>state.messages</code> array as a human message. It allows us to <strong>inject a human-like message into the conversation flow</strong> after the Agent Node has processed its input and before the next node receives the Agent Node's output.</td></tr><tr><td>Approval Prompt</td><td>No</td><td><strong>A customizable prompt presented to the human reviewer when the HITL feature is active</strong>. This prompt provides context about the tool execution, including the tool's name and purpose. The variable <code>{tools}</code> within the prompt will be dynamically replaced with the actual list of tools suggested by the agent, ensuring the human reviewer has all necessary information to make an informed decision.</td></tr><tr><td>Approve Button Text</td><td>No</td><td>Customizes <strong>the text displayed on the button for approving tool execution</strong> in the HITL interface. This allows for tailoring the language to the specific context and ensuring clarity for the human reviewer.</td></tr><tr><td>Reject Button Text</td><td>No</td><td>Customizes the <strong>text displayed on the button for rejecting tool execution</strong> in the HITL interface. Like the Approve Button Text, this customization enhances clarity and provides a clear action for the human reviewer to take if they deem the tool execution unnecessary or potentially harmful.</td></tr><tr><td>Update State</td><td>No</td><td>Provides a <strong>mechanism to modify the shared custom State object within the workflow</strong>. This is useful for storing information gathered by the agent or influencing the behavior of subsequent nodes.</td></tr><tr><td>Max Iteration</td><td>No</td><td>Limits the <strong>number of iterations</strong> an Agent Node can make within a single workflow execution.</td></tr></tbody></table>

### Best Practices

{% tabs %}
{% tab title="Pro Tips" %}
**Clear system prompt**

Craft a concise and unambiguous System Prompt that accurately reflects the agent's role and capabilities. This guides the agent's decision-making and ensures it acts within its defined scope.

**Strategic tool selection**

Choose and configure the tools available to the Agent Node, ensuring they align with the agent's purpose and the overall goals of the workflow.

**HITL for sensitive tasks**

Utilize the 'Require Approval' option for tasks involving sensitive data, requiring human judgment, or carrying a risk of unintended consequences.

**Leverage custom State updates**

Update the custom State object strategically to store gathered information or influence the behavior of downstream nodes.
{% endtab %}

{% tab title="Potential Pitfalls" %}
**Agent inaction due to tool overload**

* **Problem:** When an Agent Node has access to a large number of tools within a single workflow execution, it might struggle to decide which tool is the most appropriate to use, even when a tool is clearly necessary. This can lead to the agent failing to call any tool at all, resulting in incomplete or inaccurate responses.
* **Example:** Imagine a customer support agent designed to handle a wide range of inquiries. You've equipped it with tools for order tracking, billing information, product returns, technical support, and more. A user asks, "What's the status of my order?" but the agent, overwhelmed by the number of potential tools, responds with a generic answer like, "I can help you with that. What's your order number?" without actually using the order tracking tool.
* **Solution**
  1. **Refine system prompts:** Provide clearer instructions and examples within the Agent Node's System Prompt to guide it towards the correct tool selection. If needed, emphasize the specific capabilities of each tool and the situations in which they should be used.
  2. **Limit tool choices per node:** If possible, break down complex workflows into smaller, more manageable segments, each with a more focused set of tools. This can help reduce the cognitive load on the agent and improve its tool-selection accuracy.

**Overlooking HITL for sensitive tasks**

* **Problem:** Failing to utilize the Agent Node's "Require Approval" (HITL) feature for tasks involving sensitive information, critical decisions, or actions with potential real-world consequences can lead to unintended outcomes or damage to user trust.
* **Example:** Your travel booking agent has access to a user's payment information and can automatically book flights and hotels. Without HITL, a misinterpretation of user intent or an error in the agent's understanding could result in an incorrect booking or unauthorized use of the user's payment details.
* **Solution**
  1. **Identify sensitive actions:** Analyze your workflow and identify any actions that involve accessing or processing sensitive data (e.g., payment info, personal details).
  2. **Implement "Require Approval":** For these sensitive actions, enable the "Require Approval" option within the Agent Node. This ensures that a human reviews the agent's proposed action and the relevant context before any sensitive data is accessed or any irreversible action is taken.
  3. **Design clear approval prompts:** Provide clear and concise prompts for human reviewers, summarizing the agent's intent, the proposed action, and the relevant information needed for the reviewer to make an informed decision.
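
For instance, an Approval Prompt for this travel booking scenario might read as follows. This is illustrative only; the `{tools}` variable is the one documented in Additional Parameters and is replaced at runtime with the tools the agent proposes to execute:

{% code overflow="wrap" %}

```
You are about to allow the agent to execute the following tool(s): {tools}

Review the proposed action and the conversation context above. Approve to proceed with the booking operation, or reject if the action looks incorrect, unnecessary, or touches payment details the user did not authorize.
```

{% endcode %}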

**Unclear or incomplete system prompt**

* **Problem:** The System Prompt provided to the Agent Node lacks the necessary specificity and context to guide the agent effectively in carrying out its intended tasks. A vague or overly general prompt can lead to irrelevant responses, difficulty in understanding user intent, and an inability to leverage tools or data appropriately.
* **Example:** You're building a travel booking agent, and your System Prompt simply states "*You are a helpful AI assistant.*" This lacks the specific instructions and context needed for the agent to effectively guide users through flight searches, hotel bookings, and itinerary planning.
* **Solution:** Craft a detailed and context-aware System Prompt:

{% code overflow="wrap" %}

```
You are a travel booking agent. Your primary goal is to assist users in planning and booking their trips. 
- Guide them through searching for flights, finding accommodations, and exploring destinations.
- Be polite, patient, and offer travel recommendations based on their preferences.
- Utilize available tools to access flight data, hotel availability, and destination information.
```

{% endcode %}
{% endtab %}
{% endtabs %}

***

## 5. LLM Node

Like the Agent Node, the LLM Node is a **core component of the Sequential Agent architecture**. Both nodes utilize the same Chat Models (LLMs) by default, providing the same basic language processing capabilities, but the LLM Node distinguishes itself in these key areas.

<figure><img src="/files/jdzSpjvpkBqiknDCoVnc" alt="" width="341"><figcaption></figcaption></figure>

### Key advantages of the LLM Node

While a detailed comparison between the LLM Node and the Agent Node is available in [this section](#agent-node-vs.-llm-node-selecting-the-optimal-node-for-conversational-tasks), here's a brief overview of the **LLM Node's key advantages**:

* **Structured data:** The LLM Node provides a dedicated feature to define a JSON schema for its output. This makes it exceptionally easy to extract structured information from the LLM's responses and pass that data to subsequent nodes in the workflow. The Agent Node does not have this built-in JSON schema feature.
* **HITL:** While both nodes support HITL for tool execution, the LLM Node defers this control to the Tool Node itself, providing more flexibility in workflow design.

### Inputs

<table><thead><tr><th width="184"></th><th width="111">Required</th><th>Description</th></tr></thead><tbody><tr><td>Chat Model</td><td>No</td><td>Add a new Chat Model to <strong>overwrite the default Chat Model</strong> (LLM) of the workflow. Only compatible with models that are capable of function calling.</td></tr><tr><td>Start Node</td><td><strong>Yes</strong></td><td>Receives the <strong>initial user input</strong>, along with the custom State (if set up) and the rest of the default <code>state.messages</code> array from the Start Node.</td></tr><tr><td>Agent Node</td><td><strong>Yes</strong></td><td>Receives output from an Agent Node, which may include tool execution results or agent-generated responses.</td></tr><tr><td>Condition Node</td><td><strong>Yes</strong></td><td>Receives input from a preceding Condition Node, enabling the LLM Node to <strong>take actions or guide the conversation based on the outcome of the Condition Node's evaluation</strong>.</td></tr><tr><td>Condition Agent Node</td><td><strong>Yes</strong></td><td>Receives input from a preceding Condition Agent Node, enabling the LLM Node to <strong>take actions or guide the conversation based on the outcome of the Condition Agent Node's evaluation</strong>.</td></tr><tr><td>LLM Node</td><td><strong>Yes</strong></td><td>Receives output from another LLM Node, <strong>enabling chained reasoning</strong> or information processing across multiple LLM Nodes.</td></tr><tr><td>Tool Node</td><td><strong>Yes</strong></td><td>Receives output from a Tool Node, <strong>providing the results of tool execution for further processing</strong> or response generation.</td></tr></tbody></table>

{% hint style="info" %}
The **LLM Node requires at least one connection from the following nodes**: Start Node, Agent Node, Condition Node, Condition Agent Node, LLM Node, or Tool Node.
{% endhint %}

### **Node Setup**

<table><thead><tr><th width="240"></th><th width="118">Required</th><th>Description</th></tr></thead><tbody><tr><td>LLM Node Name</td><td><strong>Yes</strong></td><td>Add a descriptive name to the LLM Node to enhance workflow readability and make it easy to <strong>target when using loops</strong> within the workflow.</td></tr></tbody></table>

### Outputs

The LLM Node can connect to the following nodes as outputs:

* **Agent Node:** Passes the LLM's output to an Agent Node, which can then use the information to decide on actions, execute tools, or guide the conversation flow.
* **LLM Node:** Passes the output to a subsequent LLM Node, enabling chaining of multiple LLM operations. This is useful for tasks like refining text generation, performing multiple analyses, or breaking down complex language processing into stages.
* **Tool Node**: Passes the output to a Tool Node, enabling the execution of a specific tool based on the LLM Node's instructions.
* **Condition Agent Node:** Directs the flow to a Condition Agent Node. This node evaluates the LLM Node's output and its predefined conditions to determine the appropriate next step in the workflow.
* **Condition Node:** Similar to the Condition Agent Node, the Condition Node uses predefined conditions to assess the LLM Node's output, directing the flow along different branches based on the outcome.
* **End Node:** Concludes the conversation flow.
* **Loop Node:** Redirects the flow back to a previous node, enabling iterative or cyclical processes within the workflow. This could be used to refine the LLM's output over multiple iterations.

### Additional Parameters

<table><thead><tr><th width="200"></th><th width="141">Required</th><th>Description</th></tr></thead><tbody><tr><td>System Prompt</td><td>No</td><td>Defines the <strong>agent's 'persona' and guides its behavior</strong>. For example, "<em>You are a customer service agent specializing in technical support</em> [...]."</td></tr><tr><td>Human Prompt</td><td>No</td><td>This prompt is appended to the <code>state.messages</code> array as a human message. It allows us to <strong>inject a human-like message into the conversation flow</strong> after the LLM Node has processed its input and before the next node receives the LLM Node's output.</td></tr><tr><td>JSON Structured Output</td><td>No</td><td>To instruct the LLM (Chat Model) to <strong>provide the output in JSON structure schema</strong> (Key, Type, Enum Values, Description).</td></tr><tr><td>Update State</td><td>No</td><td>Provides a <strong>mechanism to modify the shared custom State object within the workflow</strong>. This is useful for storing information gathered by the LLM Node or influencing the behavior of subsequent nodes.</td></tr></tbody></table>

### Best Practices

{% tabs %}
{% tab title="Pro Tips" %}
**Clear system prompt**

Craft a concise and unambiguous System Prompt that accurately reflects the LLM Node's role and capabilities. This guides the LLM Node's decision-making and ensures it acts within its defined scope.

**Optimize for structured output**

Keep your JSON schemas as straightforward as possible, focusing on the essential data elements. Only enable JSON Structured Output when you need to extract specific data points from the LLM's response or when downstream nodes require JSON input.
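
As a minimal sketch of why simple schemas pay off, the snippet below validates a parsed LLM response against a flat list of expected keys and types before handing it to a downstream node. The schema fields and function names are hypothetical, not part of Flowise's API; they mirror the Key/Type entries you would define in the JSON Structured Output setup.

```javascript
// Hypothetical schema: mirrors the Key/Type pairs of a JSON Structured
// Output definition (illustrative only, not Flowise internals).
const schema = [
  { key: "destination", type: "string" },
  { key: "travelers", type: "number" },
];

// Validate a raw LLM response against the schema before passing it on.
function validateStructuredOutput(raw, schema) {
  let parsed;
  try {
    parsed = JSON.parse(raw);
  } catch (err) {
    return { ok: false, error: "Output is not valid JSON" };
  }
  for (const field of schema) {
    if (typeof parsed[field.key] !== field.type) {
      return { ok: false, error: `Field "${field.key}" is missing or not a ${field.type}` };
    }
  }
  return { ok: true, data: parsed };
}

// A conforming response passes; malformed output is caught early
// instead of causing errors in subsequent nodes.
const result = validateStructuredOutput('{"destination":"Lisbon","travelers":2}', schema);
console.log(result.ok); // true
```

The flatter the schema, the fewer ways a response can fail this check, which is exactly why concise schemas make workflows easier to maintain.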

**Strategic tool selection**

Choose and configure the tools available to the LLM Node (via the Tool Node), ensuring they align with the application purpose and the overall goals of the workflow.

**HITL for sensitive tasks**

Utilize the 'Require Approval' option for tasks involving sensitive data, requiring human judgment, or carrying a risk of unintended consequences.

**Leverage State updates**

Update the custom State object strategically to store gathered information or influence the behavior of downstream nodes.
{% endtab %}

{% tab title="Potential Pitfalls" %}
**Unintentional tool execution due to Incorrect HITL setup**

* **Problem:** While the LLM Node can trigger Tool Nodes, it relies on the Tool Node's configuration for Human-in-the-Loop (HITL) approval. Failing to properly configure HITL for sensitive actions can lead to tools being executed without human review, potentially causing unintended consequences.
* **Example:** Your LLM Node is designed to interact with a tool that makes changes to user data. You intend to have a human review these changes before execution, but the connected Tool Node's "Require Approval" option is not enabled. This could result in the tool automatically modifying user data based solely on the LLM's output, without any human oversight.
* **Solution**
  1. **Double-Check tool node settings:** Always ensure that the "Require Approval" option is enabled within the settings of any Tool Node that handles sensitive actions.
  2. **Test HITL thoroughly:** Before deploying your workflow, test the HITL process to ensure that human review steps are triggered as expected and that the approval/rejection mechanism functions correctly.

**Overuse or misunderstanding of JSON structured output**

* **Problem:** While the LLM Node's JSON Structured Output feature is powerful, misusing it or not fully understanding its implications can lead to data errors.
* **Example:** You define a complex JSON schema for the LLM Node's output, even though the downstream tasks only require a simple text response. This adds unnecessary complexity and makes your workflow harder to understand and maintain. Additionally, if the LLM's output doesn't conform to the defined schema, it can cause errors in subsequent nodes.
* **Solution**
  1. **Use JSON output strategically:** Only enable JSON Structured Output when you have a clear need to extract specific data points from the LLM's response or when the downstream Tool Nodes require JSON input.
  2. **Keep schemas simple:** Design your JSON schemas to be as simple and concise as possible, focusing only on the data elements that are absolutely necessary for the task.
{% endtab %}
{% endtabs %}

***

## 6. Tool Node

The Tool Node is a valuable component of Flowise's Sequential Agent system, **enabling the integration and execution of external tools** within conversational workflows. It acts as a bridge between the language-based processing of LLM Nodes and the specialized functionalities of external tools, APIs, or services.

<figure><img src="/files/w707cztYvtbDPJlmOVeT" alt="" width="300"><figcaption></figcaption></figure>

### Understanding the Tool Node

The Tool Node's primary function is to **execute external tools** based on instructions received from an LLM Node and to **provide flexibility for Human-in-the-Loop (HITL)** intervention in the tool execution process.

#### Here's a step-by-step explanation of how it works

1. **Tool Call Reception:** The Tool Node receives input from an LLM Node. If the LLM's output contains the `tool_calls` property, the Tool Node will proceed with tool execution.
2. **Execution:** If the `tool_calls` property is present, the Tool Node passes the LLM's `tool_calls` (which include the tool name and any required parameters) directly to the specified external tool; it does not process or interpret the LLM's output in any way. Otherwise, the Tool Node does not execute any tools in that particular workflow execution.
3. **Human-in-the-Loop (HITL):** The Tool Node allows for optional HITL, enabling human review and approval or rejection of tool execution before it occurs.
4. **Output passing:** After the tool execution (either automatic or after HITL approval), the Tool Node receives the tool's output and passes it to the next node in the workflow. If the Tool Node's output is not connected to a subsequent node, the tool's output is returned to the original LLM Node for further processing.
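
The four steps above can be sketched as follows. This is a simplified illustration under assumed shapes for the tool registry and LLM output, not Flowise's internal implementation:

```javascript
// Hypothetical tool registry: maps a tool name to a function
// that executes it with the given parameters.
const toolRegistry = {
  get_weather: ({ city }) => `Sunny in ${city}`,
};

function runToolNode(llmOutput, requireApproval = false, approved = true) {
  // 1. Tool call reception: proceed only if the LLM output carries tool_calls
  if (!llmOutput.tool_calls || llmOutput.tool_calls.length === 0) {
    return { executed: false, results: [] };
  }
  // 3. Optional HITL: skip execution when approval is required but denied
  if (requireApproval && !approved) {
    return { executed: false, results: [], rejected: true };
  }
  // 2. Execution: pass each tool call (name + parameters) straight to the
  //    external tool, without interpreting the LLM's output
  const results = llmOutput.tool_calls.map((call) =>
    toolRegistry[call.name](call.args)
  );
  // 4. Output passing: results flow to the next node, or back to the
  //    original LLM Node if no subsequent node is connected
  return { executed: true, results };
}

const out = runToolNode({
  tool_calls: [{ name: "get_weather", args: { city: "Paris" } }],
});
console.log(out.results[0]); // "Sunny in Paris"
```
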

### Inputs

<table><thead><tr><th width="164"></th><th width="107">Required</th><th>Description</th></tr></thead><tbody><tr><td>LLM Node</td><td><strong>Yes</strong></td><td>Receives the output from an LLM Node, which may or may not contain the <code>tool_calls</code> property. If present, the Tool Node will use it to execute the specified tool.</td></tr><tr><td>External Tools</td><td>No</td><td>Provides the Tool Node with <strong>access to a suite of external tools</strong>, enabling it to perform actions and retrieve information.</td></tr></tbody></table>

### Node Setup

<table><thead><tr><th width="183"></th><th width="101">Required</th><th>Description</th></tr></thead><tbody><tr><td>Tool Node Name</td><td><strong>Yes</strong></td><td>Add a descriptive name to the Tool Node to enhance workflow readability.</td></tr><tr><td>Require Approval (HITL)</td><td>No</td><td><strong>Activates the Human-in-the-loop (HITL) feature</strong>. If set to '<strong>True</strong>,' the Tool Node will request human approval before executing any tool. This is particularly valuable for sensitive operations or when human oversight is desired. Defaults to '<strong>False</strong>,' allowing the Tool Node to execute tools autonomously.</td></tr></tbody></table>

### Outputs

The Tool Node can connect to the following nodes as outputs:

* **Agent Node:** Passes the Tool Node's output (the result of the executed tool) to an Agent Node. The Agent Node can then use this information to decide on actions, execute further tools, or guide the conversation flow.
* **LLM Node:** Passes the output to a subsequent LLM Node. This enables the integration of tool results into the LLM's processing, allowing for further analysis or refinement of the conversation flow based on the tool's output.
* **Condition Agent Node:** Directs the flow to a Condition Agent Node. This node evaluates the Tool Node's output and its predefined conditions to determine the appropriate next step in the workflow.
* **Condition Node:** Similar to the Condition Agent Node, the Condition Node uses predefined conditions to assess the Tool Node's output, directing the flow along different branches based on the outcome.
* **End Node:** Concludes the conversation flow.
* **Loop Node:** Redirects the flow back to a previous node, enabling iterative or cyclical processes within the workflow. This could be used for tasks that require multiple tool executions or involve refining the conversation based on tool results.

### Additional Parameters

<table><thead><tr><th width="200"></th><th width="102">Required</th><th>Description</th></tr></thead><tbody><tr><td>Approval Prompt</td><td>No</td><td><strong>A customizable prompt presented to the human reviewer when the HITL feature is active</strong>. This prompt provides context about the tool execution, including the tool's name and purpose. The variable <code>{tools}</code> within the prompt will be dynamically replaced with the actual list of tools suggested by the LLM Node, ensuring the human reviewer has all necessary information to make an informed decision.</td></tr><tr><td>Approve Button Text</td><td>No</td><td>Customizes <strong>the text displayed on the button for approving tool execution</strong> in the HITL interface. This allows for tailoring the language to the specific context and ensuring clarity for the human reviewer.</td></tr><tr><td>Reject Button Text</td><td>No</td><td>Customizes the <strong>text displayed on the button for rejecting tool execution</strong> in the HITL interface. Like the Approve Button Text, this customization enhances clarity and provides a clear action for the human reviewer to take if they deem the tool execution unnecessary or potentially harmful.</td></tr><tr><td>Update State</td><td>No</td><td>Provides a <strong>mechanism to modify the custom State object within the workflow</strong>. This is useful for storing information gathered by the Tool Node (after the tool execution) or influencing the behavior of subsequent nodes.</td></tr></tbody></table>

### Best Practices

{% tabs %}
{% tab title="Pro Tips" %}
**Strategic HITL placement**

Consider which tools require human oversight (HITL) and enable the "Require Approval" option accordingly.

**Informative Approval Prompts**

When using HITL, design clear and informative prompts for human reviewers. Provide sufficient context from the conversation and summarize the tool's intended action.
{% endtab %}

{% tab title="Potential Pitfalls" %}
**Unhandled tool output formats**

* **Problem:** The Tool Node outputs data in a format that is not expected or handled by subsequent nodes in the workflow, leading to errors or incorrect processing.
* **Example:** A Tool Node retrieves data from an API in JSON format, but the following LLM Node expects text input, causing a parsing error.
* **Solution:** Ensure that the output format of the external tool is compatible with the input requirements of the nodes connected to the Tool Node's output.
{% endtab %}
{% endtabs %}

***

## 7. Condition Node

The Condition Node acts as a **decision-making point in Sequential Agent workflows**, evaluating a set of predefined conditions to determine the flow's next path.

<figure><img src="/files/g1JqGc3y3M4LNsXGp5AI" alt="" width="299"><figcaption></figcaption></figure>

### Understanding the Condition Node

The Condition Node is essential for building workflows that adapt to different situations and user inputs. It examines the current State of the conversation, which includes all messages exchanged and any custom State variables previously defined. Then, based on the evaluation of the conditions specified in the node setup, the Condition Node directs the flow to one of its outputs.

For instance, after an Agent or LLM Node provides a response, a Condition Node could check if the response contains a specific keyword or if a certain condition is met in the custom State. If it does, the flow might be directed to an Agent Node for further action. If not, it could lead to a different path, perhaps ending the conversation or prompting the user with additional questions.

This enables us to **create branches in our workflow**, where the path taken depends on the data flowing through the system.

#### Here's a step-by-step explanation of how it works

1. The Condition Node receives input from any preceding node: Start Node, Agent Node, LLM Node, or Tool Node.
2. It has access to the full conversation history and the custom State (if any), giving it plenty of context to work with.
3. We define a condition that the node will evaluate. This could be checking for keywords, comparing values in the state, or any other logic we could implement via JavaScript.
4. Based on whether the condition evaluates to **true** or **false**, the Condition Node sends the flow down one of its predefined output paths. This creates a "fork in the road" or branch for our workflow.

### How to set up conditions

The Condition Node allows us to define dynamic branching logic in our workflow by choosing either a **table-based interface** or a **JavaScript code editor** to define the conditions that will control the conversation flow.

<figure><img src="/files/JJoWawOuludGjmspaoqR" alt=""><figcaption></figcaption></figure>

<details>

<summary>Conditions using CODE</summary>

The **Condition Node uses JavaScript** to evaluate specific conditions within the conversation flow.

We can set up conditions based on keywords, State changes, or other factors to dynamically guide the workflow to different branches based on the context of the conversation. Here are some examples:

**Keyword condition**

This checks if a specific word or phrase exists in the conversation history.

* **Example:** We want to check if the user said "yes" in their last message.

{% code overflow="wrap" %}

```javascript
const lastMessage = $flow.state.messages[$flow.state.messages.length - 1].content; 
return lastMessage.includes("yes") ? "Output 1" : "Output 2";
```

{% endcode %}

1. This code gets the last message from `state.messages` and checks if it contains "yes".
2. If "yes" is found, the flow goes to "Output 1"; otherwise, it goes to "Output 2".

**State change condition**

This checks if a specific value in the custom State has changed to a desired value.

* **Example:** We're tracking an orderStatus variable in our custom State, and we want to check if it has become "confirmed".

{% code overflow="wrap" %}

```javascript
return $flow.state.orderStatus === "confirmed" ? "Output 1" : "Output 2";
```

{% endcode %}

1. This code directly compares the orderStatus value in our custom State to "confirmed".
2. If it matches, the flow goes to "Output 1"; otherwise, it goes to "Output 2".

</details>

<details>

<summary>Conditions using TABLE</summary>

The Condition Node allows us to define conditions using a **user-friendly table interface**, making it easy to create dynamic workflows without writing JavaScript code.

You can set up conditions based on keywords, State changes, or other factors to guide the conversation flow along different branches. Here are some examples:

**Keyword condition**

This checks if a specific word or phrase exists in the conversation history.

* **Example:** We want to check if the user said "yes" in their last message.
* **Setup**

  <table data-header-hidden><thead><tr><th width="294"></th><th width="116"></th><th width="99"></th><th></th></tr></thead><tbody><tr><td><strong>Variable</strong></td><td><strong>Operation</strong></td><td><strong>Value</strong></td><td><strong>Output Name</strong></td></tr><tr><td>$flow.state.messages[-1].content</td><td>Is</td><td>Yes</td><td>Output 1</td></tr></tbody></table>

  1. This table entry checks if the content (.content) of the last message (\[-1]) in `state.messages` is equal to "Yes".
  2. If the condition is met, the flow goes to "Output 1". Otherwise, the workflow is directed to a default "End" output.

**State change condition**

This checks if a specific value in our custom State has changed to a desired value.

* **Example:** We're tracking an orderStatus variable in our custom State, and we want to check if it has become "confirmed".
* **Setup**

  <table data-header-hidden><thead><tr><th width="266"></th><th width="113"></th><th></th><th></th></tr></thead><tbody><tr><td><strong>Variable</strong></td><td><strong>Operation</strong></td><td><strong>Value</strong></td><td><strong>Output Name</strong></td></tr><tr><td>$flow.state.orderStatus</td><td>Is</td><td>Confirmed</td><td>Output 1</td></tr></tbody></table>

  1. This table entry checks if the value of orderStatus in the custom State is equal to "confirmed".
  2. If the condition is met, the flow goes to "Output 1". Otherwise, the workflow is directed to a default "End" output.

</details>

### Defining conditions using the table interface

This visual approach allows you to easily set up rules that determine the path of your conversational flow, based on factors like user input, the current state of the conversation, or the results of actions taken by other nodes.

<details>

<summary>Table-Based: Condition Node</summary>

* **Updated on 09/08/2024**

  <table><thead><tr><th width="134"></th><th width="189">Description</th><th>Options/Syntax</th></tr></thead><tbody><tr><td><strong>Variable</strong></td><td>The variable or data element to evaluate in the condition.</td><td>- <code>$flow.state.messages.length</code> (Total Messages)<br>- <code>$flow.state.messages[0].content</code> (First Message Content)<br>- <code>$flow.state.messages[-1].content</code> (Last Message Content)<br>- <code>$vars.&#x3C;variable-name></code> (Global variable)</td></tr><tr><td><strong>Operation</strong></td><td>The comparison or logical operation to perform on the variable.</td><td>- Contains<br>- Not Contains<br>- Start With<br>- End With<br>- Is<br>- Is Not<br>- Is Empty<br>- Is Not Empty<br>- Greater Than<br>- Less Than<br>- Equal To<br>- Not Equal To<br>- Greater Than or Equal To<br>- Less Than or Equal To</td></tr><tr><td><strong>Value</strong></td><td>The value to compare the variable against.</td><td>- Depends on the data type of the variable and the selected operation.<br>- Examples: "yes", 10, "Hello"</td></tr><tr><td><strong>Output Name</strong></td><td>The name of the output path to follow if the condition evaluates to <code>true</code>.</td><td>- User-defined name (e.g., "Agent1", "End", "Loop")</td></tr></tbody></table>

</details>

### Inputs

<table><thead><tr><th width="167"></th><th width="118">Required</th><th>Description</th></tr></thead><tbody><tr><td>Start Node</td><td><strong>Yes</strong></td><td>Receives the State from the Start Node. This allows the Condition Node to <strong>evaluate conditions based on the initial context of the conversation</strong>, including any custom State.</td></tr><tr><td>Agent Node</td><td><strong>Yes</strong></td><td>Receives the Agent Node's output. This enables the Condition Node to <strong>make decisions based on the agent's actions</strong> and the conversation history, including any custom State.</td></tr><tr><td>LLM Node</td><td><strong>Yes</strong></td><td>Receives the LLM Node's output. This allows the Condition Node to <strong>evaluate conditions based on the LLM's response</strong> and the conversation history, including any custom State.</td></tr><tr><td>Tool Node</td><td><strong>Yes</strong></td><td>Receives the Tool Node's output. This enables the Condition Node to <strong>make decisions based on the results of tool execution</strong> and the conversation history, including any custom State.</td></tr></tbody></table>

{% hint style="info" %}
The **Condition Node requires at least one connection from the following nodes**: Start Node, Agent Node, LLM Node, or Tool Node.
{% endhint %}

### Outputs

The Condition Node **dynamically determines its output path based on the predefined conditions**, using either the table-based interface or JavaScript. This provides flexibility in directing the workflow based on condition evaluations.

#### Condition evaluation logic

* **Table-Based conditions:** The conditions in the table are evaluated sequentially, from top to bottom. The first condition that evaluates to true triggers its corresponding output. If none of the predefined conditions are met, the workflow is directed to the default "End" output.
* **Code-Based conditions:** When using JavaScript, we must explicitly return the name of the desired output path, including a name for the default "End" output.
* **Single output path:** Only one output path is activated at a time. Even if multiple conditions could be true, only the first matching condition determines the flow.
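
To illustrate the code-based evaluation logic, here is a sketch of a condition function with two named outputs and an explicit default "End" return. In Flowise the `$flow` object is injected at runtime; it is mocked here so the logic can be run standalone, and the state values are hypothetical:

```javascript
// Mock of the runtime-injected $flow object (illustrative state values).
const $flow = {
  state: {
    orderStatus: "pending",
    messages: [{ content: "Where is my order?" }],
  },
};

function evaluateCondition() {
  const lastMessage = $flow.state.messages[$flow.state.messages.length - 1].content;
  // Conditions are checked in order; the first match wins,
  // so only one output path is ever activated.
  if ($flow.state.orderStatus === "confirmed") return "Output 1";
  if (lastMessage.includes("order")) return "Output 2";
  // Code-based conditions must explicitly return the default path name.
  return "End";
}

console.log(evaluateCondition()); // "Output 2"
```
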

#### Connecting outputs

Each predefined output, including the default "End" output, can be connected to any of the following nodes:

* **Agent Node:** To continue the conversation with an agent, potentially taking actions based on the condition's outcome.
* **LLM Node:** To process the current State and conversation history with an LLM, generating responses or making further decisions.
* **End Node:** To terminate the conversation flow. If any output, including the default "End" output, is connected to an End Node, the Condition Node will output the last response from the preceding node and end the workflow.
* **Loop Node:** To redirect the flow back to a previous sequential node, enabling iterative processes based on the condition's outcome.

### Node Setup

<table><thead><tr><th width="178"></th><th width="110">Required</th><th>Description</th></tr></thead><tbody><tr><td>Condition Node Name</td><td>No</td><td>An optional, <strong>human-readable name</strong> for the condition being evaluated. This is helpful for understanding the workflow at a glance.</td></tr><tr><td>Condition</td><td><strong>Yes</strong></td><td>This is where we <strong>define the logic that will be evaluated to determine the output paths</strong>.</td></tr></tbody></table>

### Best Practices

{% tabs %}
{% tab title="Pro Tips" %}
**Clear condition naming**

Use descriptive names for your conditions (e.g., "If user is under 18, then Policy Advisor Agent", "If order is confirmed, then End Node") to make your workflow easier to understand and debug.

**Prioritize simple conditions**

Start with simple conditions and gradually add complexity as needed. This makes your workflow more manageable and reduces the risk of errors.
{% endtab %}

{% tab title="Potential Pitfalls" %}
**Mismatched condition logic and workflow design**

* **Problem:** The conditions you define in the Condition Node do not accurately reflect the intended logic of your workflow, leading to unexpected branching or incorrect execution paths.
* **Example:** You set up a condition to check if the user's age is greater than 18, but the output path for that condition leads to a section designed for users under 18.
* **Solution:** Review your conditions and ensure that the output paths associated with each condition match the intended workflow logic. Use clear and descriptive names for your outputs to avoid confusion.

**Insufficient State management**

* **Problem:** The Condition Node relies on a custom state variable that is not updated correctly, leading to inaccurate condition evaluations and incorrect branching.
* **Example:** You're tracking a "userLocation" variable in the custom State, but the variable is not updated when the user provides their location. The Condition Node evaluates the condition based on the outdated value, leading to an incorrect path.
* **Solution:** Ensure that any custom state variables used in your conditions are updated correctly throughout the workflow.
  {% endtab %}
  {% endtabs %}

***

## 8. Condition Agent Node

The Condition Agent Node provides **dynamic and intelligent routing within Sequential Agent flows**. It combines the capabilities of the **LLM Node** (LLM and JSON Structured Output) and the **Condition Node** (user-defined conditions), allowing us to leverage agent-based reasoning and conditional logic within a single node.

<figure><img src="/files/OslbGdIdaYaDfwK6zUHW" alt="" width="299"><figcaption></figcaption></figure>

### Key functionalities

* **Unified agent-based routing:** Combines agent reasoning, structured output, and conditional logic in a single node, simplifying workflow design.
* **Contextual awareness:** The agent considers the entire conversation history and any custom State when evaluating conditions.
* **Flexibility:** Provides both table-based and code-based options for defining conditions, catering to different user preferences and skill levels.

### Setting up the Condition Agent Node

The Condition Agent Node acts as a specialized agent that can both **process information and make routing decisions**. Here's how to configure it:

1. **Define the agent's persona**
   * In the "System Prompt" field, provide a clear and concise description of the agent's role and the task it needs to perform for conditional routing. This prompt will guide the agent's understanding of the conversation and its decision-making process.
2. **Structure the Agent's Output (Optional)**
   * If you want the agent to produce structured output, use the "JSON Structured Output" feature. Define the desired schema for the output, specifying the keys, data types, and any enum values. This structured output will be used by the agent when evaluating conditions.
3. **Define conditions**
   * Choose either the table-based interface or the JavaScript code editor to define the conditions that will determine the routing behavior.
     * **Table-Based interface:** Add rows to the table, specifying the variable to check, the comparison operation, the value to compare against, and the output name to follow if the condition is met.
     * **JavaScript code:** Write custom JavaScript snippets to evaluate conditions. Use the `return` statement to specify the name of the output path to follow based on the condition's result.
4. **Connect outputs**
   * Connect each predefined output, including the default "End" output, to the appropriate subsequent node in the workflow. This could be an Agent Node, LLM Node, Loop Node, or an End Node.
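
The code-based option in step 3 can be sketched in plain JavaScript. This is a hedged illustration, not Flowise's exact runtime: in Flowise, `$flow` is injected when the condition is evaluated, and the `sentiment` key and output names below are hypothetical, assuming a JSON Structured Output schema with a `sentiment` enum.

```javascript
// Hedged sketch: the kind of JavaScript a Condition Agent Node condition
// might contain, wrapped in a function with a mock $flow for illustration.
// In Flowise, $flow is provided by the runtime; the "sentiment" key and
// the output names below are hypothetical.
function route($flow) {
  const sentiment = $flow.output.sentiment; // from JSON Structured Output
  if (sentiment === "negative") return "Escalation Agent";
  if (sentiment === "positive") return "Upsell Agent";
  return "End"; // fall back to the default "End" output
}

// A mock agent output routes to the escalation path:
route({ output: { sentiment: "negative" } }); // "Escalation Agent"
```

Returning an output name that matches a connected output is what steers the flow; any unmatched condition should fall through to the default "End" output.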

### How to set up conditions

The Condition Agent Node allows us to define dynamic branching logic in our workflow by choosing either a **table-based interface** or a **JavaScript code editor** to define the conditions that will control the conversation flow.

<figure><img src="/files/JJoWawOuludGjmspaoqR" alt=""><figcaption></figcaption></figure>

<details>

<summary>Conditions using CODE</summary>

The Condition Agent Node, like the Condition Node, **uses JavaScript code to evaluate specific conditions** within the conversation flow.

However, the Condition Agent Node can evaluate conditions based on a wider range of factors, including keywords, state changes, and the content of its own output (either as free-form text or structured JSON data). This allows for more nuanced and context-aware routing decisions. Here are some examples:

**Keyword condition**

This checks if a specific word or phrase exists in the conversation history.

* **Example:** We want to check if the user said "yes" in their last message.

{% code overflow="wrap" %}

```javascript
const lastMessage = $flow.state.messages[$flow.state.messages.length - 1].content; 
return lastMessage.includes("yes") ? "Output 1" : "Output 2";
```

{% endcode %}

1. This code gets the last message from state.messages and checks if it contains "yes".
2. If "yes" is found, the flow goes to "Output 1"; otherwise, it goes to "Output 2".

**State change condition**

This checks if a specific value in the custom State has changed to a desired value.

* **Example:** We're tracking an orderStatus variable in our custom State, and we want to check if it has become "confirmed".

{% code overflow="wrap" %}

```javascript
return $flow.state.orderStatus === "confirmed" ? "Output 1" : "Output 2";
```

{% endcode %}

1. This code directly compares the orderStatus value in our custom State to "confirmed".
2. If it matches, the flow goes to "Output 1"; otherwise, it goes to "Output 2".

</details>

<details>

<summary>Conditions using TABLE</summary>

The Condition Agent Node also provides a **user-friendly table interface for defining conditions**, similar to the Condition Node. You can set up conditions based on keywords, state changes, or the agent's own output, allowing you to create dynamic workflows without writing JavaScript code.

This table-based approach simplifies condition management and makes it easier to visualize the branching logic. Here are some examples:

**Keyword condition**

This checks if a specific word or phrase exists in the conversation history.

* **Example:** We want to check if the user said "yes" in their last message.
* **Setup**

  <table data-header-hidden><thead><tr><th width="305"></th><th width="116"></th><th width="99"></th><th></th></tr></thead><tbody><tr><td><strong>Variable</strong></td><td><strong>Operation</strong></td><td><strong>Value</strong></td><td><strong>Output Name</strong></td></tr><tr><td>$flow.state.messages[-1].content</td><td>Is</td><td>Yes</td><td>Output 1</td></tr></tbody></table>

  1. This table entry checks if the content (.content) of the last message (\[-1]) in `state.messages` is equal to "Yes".
  2. If the condition is met, the flow goes to "Output 1". Otherwise, the workflow is directed to a default "End" output.

**State change condition**

This checks if a specific value in our custom State has changed to a desired value.

* **Example:** We're tracking an orderStatus variable in our custom State, and we want to check if it has become "confirmed".
* **Setup**

  <table data-header-hidden><thead><tr><th width="266"></th><th width="113"></th><th></th><th></th></tr></thead><tbody><tr><td><strong>Variable</strong></td><td><strong>Operation</strong></td><td><strong>Value</strong></td><td><strong>Output Name</strong></td></tr><tr><td>$flow.state.orderStatus</td><td>Is</td><td>Confirmed</td><td>Output 1</td></tr></tbody></table>

  1. This table entry checks if the value of orderStatus in the custom State is equal to "confirmed".
  2. If the condition is met, the flow goes to "Output 1". Otherwise, the workflow is directed to a default "End" output.

</details>

### Defining conditions using the table interface

This visual approach allows you to easily set up rules that determine the path of your conversational flow, based on factors like user input, the current state of the conversation, or the results of actions taken by other nodes.

<details>

<summary>Table-Based: Condition Agent Node</summary>

* **Updated on 09/08/2024**

  <table><thead><tr><th width="125"></th><th width="186">Description</th><th>Options/Syntax</th></tr></thead><tbody><tr><td><strong>Variable</strong></td><td>The variable or data element to evaluate in the condition. This can include data from the agent's output.</td><td>- <code>$flow.output.content</code> (Agent Output - string)<br>- <code>$flow.output.&#x3C;replace-with-key></code> (Agent's JSON Key Output - string/number)<br>- <code>$flow.state.messages.length</code> (Total Messages)<br>- <code>$flow.state.messages[0].content</code> (First Message Content)<br>- <code>$flow.state.messages[-1].content</code> (Last Message Content)<br>- <code>$vars.&#x3C;variable-name></code> (Global variable)</td></tr><tr><td><strong>Operation</strong></td><td>The comparison or logical operation to perform on the variable.</td><td>- Contains<br>- Not Contains<br>- Start With<br>- End With<br>- Is<br>- Is Not<br>- Is Empty<br>- Is Not Empty<br>- Greater Than<br>- Less Than<br>- Equal To<br>- Not Equal To<br>- Greater Than or Equal To<br>- Less Than or Equal To</td></tr><tr><td><strong>Value</strong></td><td>The value to compare the variable against.</td><td>- Depends on the data type of the variable and the selected operation.<br>- Examples: "yes", 10, "Hello"</td></tr><tr><td><strong>Output Name</strong></td><td>The name of the output path to follow if the condition evaluates to <code>true</code>.</td><td>- User-defined name (e.g., "Agent1", "End", "Loop")</td></tr></tbody></table>

</details>

### Inputs

<table><thead><tr><th width="167"></th><th width="118">Required</th><th>Description</th></tr></thead><tbody><tr><td>Start Node</td><td>Yes</td><td>Receives the State from the Start Node. This allows the Condition Agent Node to <strong>evaluate conditions based on the initial context</strong> of the conversation, including any custom State.</td></tr><tr><td>Agent Node</td><td>Yes</td><td>Receives the Agent Node's output. This enables the Condition Agent Node to <strong>make decisions based on the agent's actions</strong> and the conversation history, including any custom State.</td></tr><tr><td>LLM Node</td><td>Yes</td><td>Receives the LLM Node's output. This allows the Condition Agent Node to <strong>evaluate conditions based on the LLM's response</strong> and the conversation history, including any custom State.</td></tr><tr><td>Tool Node</td><td>Yes</td><td>Receives the Tool Node's output. This enables the Condition Agent Node to <strong>make decisions based on the results of tool execution</strong> and the conversation history, including any custom State.</td></tr></tbody></table>

{% hint style="info" %}
The **Condition Agent Node requires at least one connection from the following nodes**: Start Node, Agent Node, LLM Node, or Tool Node.
{% endhint %}

### Node Setup

<table><thead><tr><th width="178">Parameter</th><th width="110">Required</th><th>Description</th></tr></thead><tbody><tr><td>Name</td><td>No</td><td>Add a descriptive name to the Condition Agent Node to enhance workflow readability and make the flow easier to follow.</td></tr><tr><td>Condition</td><td><strong>Yes</strong></td><td>This is where we <strong>define the logic that will be evaluated to determine the output paths</strong>.</td></tr></tbody></table>

### Outputs

The Condition Agent Node, like the Condition Node, **dynamically determines its output path based on the conditions defined**, using either the table-based interface or JavaScript. This provides flexibility in directing the workflow based on condition evaluations.

#### Condition evaluation logic

* **Table-Based conditions:** The conditions in the table are evaluated sequentially, from top to bottom. The first condition that evaluates to true triggers its corresponding output. If none of the predefined conditions are met, the workflow is directed to the default "End" output.
* **Code-Based conditions:** When using JavaScript, we must explicitly return the name of the desired output path, including a name for the default "End" output.
* **Single output path:** Only one output path is activated at a time. Even if multiple conditions could be true, only the first matching condition determines the flow.
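
As a rough sketch, the top-to-bottom, first-match-wins semantics amount to the following. This is plain JavaScript with a hypothetical `{ test, output }` row shape, not Flowise's internal API:

```javascript
// Hedged sketch of the table's top-to-bottom evaluation: rows are
// hypothetical { test, output } pairs, not Flowise's internal format.
function evaluate(rows, state) {
  for (const row of rows) {
    if (row.test(state)) return row.output; // first match wins
  }
  return "End"; // default output when no condition matches
}

const rows = [
  { test: (s) => s.orderStatus === "confirmed", output: "Output 1" },
  { test: (s) => s.orderStatus === "cancelled", output: "Output 2" },
];

evaluate(rows, { orderStatus: "cancelled" }); // "Output 2"
evaluate(rows, { orderStatus: "pending" });   // "End"
```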

#### Connecting outputs

Each predefined output, including the default "End" output, can be connected to any of the following nodes:

* **Agent Node:** To continue the conversation with an agent, potentially taking actions based on the condition's outcome.
* **LLM Node:** To process the current State and conversation history with an LLM, generating responses or making further decisions.
* **End Node:** To terminate the conversation flow. If the default "End" output is connected to an End Node, the Condition Agent Node will output the last response from the preceding node and end the conversation.
* **Loop Node:** To redirect the flow back to a previous sequential node, enabling iterative processes based on the condition's outcome.

#### Key differences from the Condition Node

* The Condition **Agent Node incorporates an agent's reasoning** and structured output into the condition evaluation process.
* It provides a more integrated approach to agent-based condition routing.

### Additional Parameters

<table><thead><tr><th width="180"></th><th width="111">Required</th><th>Description</th></tr></thead><tbody><tr><td>System Prompt</td><td>No</td><td><strong>Defines the Condition Agent's 'persona' and guides its behavior for making routing decisions.</strong> For example: "You are a customer service agent specializing in technical support. Your goal is to help customers with technical issues related to our product. Based on the user's query, identify the specific technical issue (e.g., connectivity problems, software bugs, hardware malfunctions)."</td></tr><tr><td>Human Prompt</td><td>No</td><td>This prompt is appended to the <code>state.messages</code> array as a human message. It allows us to <strong>inject a human-like message into the conversation flow</strong> after the Condition Agent Node has processed its input and before the next node receives the Condition Agent Node's output.</td></tr><tr><td>JSON Structured Output</td><td>No</td><td>To instruct the Condition Agent Node to <strong>provide the output in JSON structure schema</strong> (Key, Type, Enum Values, Description).</td></tr></tbody></table>

### Best Practices

{% tabs %}
{% tab title="Pro Tips" %}
**Craft a clear and focused system prompt**

Provide a well-defined persona and clear instructions to the agent in the System Prompt. This will guide its reasoning and help it generate relevant output for the conditional logic.

**Structure output for reliable conditions**

Use the JSON Structured Output feature to define a schema for the Condition Agent's output. This will ensure that the output is consistent and easily parsable, making it more reliable for use in conditional evaluations.
{% endtab %}

{% tab title="Potential Pitfalls" %}
**Unreliable routing due to unstructured output**

* **Problem:** The Condition Agent Node is not configured to output structured JSON data, leading to unpredictable output formats that can make it difficult to define reliable conditions.
* **Example:** The Condition Agent Node is asked to determine user sentiment (positive, negative, neutral) but outputs its assessment as a free-form text string. The variability in the agent's language makes it challenging to create accurate conditions in the conditional table or code.
* **Solution:** Use the JSON Structured Output feature to define a schema for the agent's output. For example, specify a "sentiment" key with an enum of "positive," "negative," and "neutral." This will ensure that the agent's output is consistently structured, making it much easier to create reliable conditions.
  {% endtab %}
  {% endtabs %}

***

## 9. Loop Node

The Loop Node allows us to create loops within our conversational flow, **redirecting the conversation back to a specific point**. This is useful for scenarios where we need to repeat a certain sequence of actions or questions based on user input or specific conditions.

<figure><img src="/files/rVTZdTIarjfLktwOPLYs" alt="" width="335"><figcaption></figcaption></figure>

### Understanding the Loop Node

The Loop Node acts as a connector, redirecting the flow back to a specific point in the graph, allowing us to create loops within our conversational flow. **It passes the current State, which includes the output of the node preceding the Loop Node, to our target node.** This data transfer allows our target node to process information from the previous iteration of the loop and adjust its behavior accordingly.

For instance, let's say we're building a chatbot that helps users book flights. We might use a loop to iteratively refine the search criteria based on user feedback.

#### Here's how the Loop Node could be used

1. **LLM Node (Initial Search):** The LLM Node receives the user's initial flight request (e.g., "Find flights from Madrid to New York in July"). It queries a flight search API and returns a list of possible flights.
2. **Agent Node (Present Options):** The Agent Node presents the flight options to the user and asks if they would like to refine their search (e.g., "Would you like to filter by price, airline, or departure time?").
3. **Condition Agent Node:** The Condition Agent Node checks the user's response and has two outputs:
   * **If the user wants to refine:** The flow goes to the "Refine Search" LLM Node.
   * **If the user is happy with the results:** The flow proceeds to the booking process.
4. **LLM Node (Refine Search):** This LLM Node gathers the user's refinement criteria (e.g., "Show me only flights under $500") and updates the State with the new search parameters.
5. **Loop Node:** The Loop Node redirects the flow back to the initial LLM Node ("Initial Search"). It passes the updated State, which now includes the refined search criteria.
6. **Iteration:** The initial LLM Node performs a new search using the refined criteria, and the process repeats from step 2.

**In this example, the Loop Node enables an iterative search refinement process.** The system can continue to loop back and refine the search results until the user is satisfied with the options presented.
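
The iteration above can be sketched outside Flowise as plain JavaScript. `searchFlights` and the state shape are hypothetical stand-ins for the flight search API and the custom State:

```javascript
// Hedged sketch of steps 1-6 as plain JavaScript. searchFlights and the
// State shape are hypothetical stand-ins for the flight API and custom State.
function searchFlights(criteria) {
  const allFlights = [
    { airline: "AirOne", price: 650 },
    { airline: "AirTwo", price: 420 },
  ];
  return allFlights.filter(
    (f) => !criteria.maxPrice || f.price <= criteria.maxPrice
  );
}

let state = { criteria: { from: "Madrid", to: "New York", month: "July" } };

// Step 1: the initial search uses the unrefined criteria (returns both flights)
let results = searchFlights(state.criteria);

// Step 4: the user refines ("only flights under $500"); the State is updated
state.criteria.maxPrice = 500;

// Step 5: the Loop Node redirects back with the updated State;
// the repeated search now returns only the cheaper flight
results = searchFlights(state.criteria); // [{ airline: "AirTwo", price: 420 }]
```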

### Inputs

<table><thead><tr><th width="197"></th><th width="104">Required</th><th>Description</th></tr></thead><tbody><tr><td>Agent Node</td><td><strong>Yes</strong></td><td>Receives the output of a preceding Agent Node. This data is then sent back to the target node specified in the "Loop To" parameter.</td></tr><tr><td>LLM Node</td><td><strong>Yes</strong></td><td>Receives the output of a preceding LLM Node. This data is then sent back to the target node specified in the "Loop To" parameter.</td></tr><tr><td>Tool Node</td><td><strong>Yes</strong></td><td>Receives the output of a preceding Tool Node. This data is then sent back to the target node specified in the "Loop To" parameter.</td></tr><tr><td>Condition Node</td><td><strong>Yes</strong></td><td>Receives the output of a preceding Condition Node. This data is then sent back to the target node specified in the "Loop To" parameter.</td></tr><tr><td>Condition Agent Node</td><td><strong>Yes</strong></td><td>Receives the output of a preceding Condition Agent Node. This data is then sent back to the target node specified in the "Loop To" parameter.</td></tr></tbody></table>

{% hint style="info" %}
The **Loop Node requires at least one connection from the following nodes**: Agent Node, LLM Node, Tool Node, Condition Node, or Condition Agent Node.
{% endhint %}

### Node Setup

<table><thead><tr><th width="125"></th><th width="109">Required</th><th>Description</th></tr></thead><tbody><tr><td>Loop To</td><td><strong>Yes</strong></td><td>The Loop Node requires us to <strong>specify the target node</strong> ("Loop To") where the conversational flow should be redirected. This target node must be an <strong>Agent Node</strong> or <strong>LLM Node</strong>.</td></tr></tbody></table>

### Outputs

The **Loop Node does not have any direct output connections**. It redirects the flow back to the specified sequential node in the graph.

### Best Practices

{% tabs %}
{% tab title="Pro Tips" %}
**Clear loop purpose**

Define a clear purpose for each loop in your workflow. If possible, document with a sticky note what you're trying to achieve with the loop.
{% endtab %}

{% tab title="Potential Pitfalls" %}
**Confusing workflow structure**

* **Problem:** Excessive or poorly designed loops make the workflow difficult to understand and maintain.
* **Example:** You use multiple nested loops without clear purpose or labels, making it hard to follow the flow of the conversation.
* **Solution:** Use loops sparingly and only when necessary. Clearly document your Loop Nodes and the nodes they connect to.

**Infinite loops due to missing or incorrect exit conditions**

* **Problem:** The loop never terminates because the condition that should trigger the loop's exit is either missing or incorrectly defined.
* **Example:** A Loop Node is used to iteratively gather user information. However, the workflow lacks a Condition Agent Node to check if all required information has been collected. As a result, the loop continues indefinitely, repeatedly asking the user for the same information.
* **Solution:** Always define clear and accurate exit conditions for loops. Use Condition Nodes to check state variables, user input, or other factors that indicate when the loop should terminate.
  {% endtab %}
  {% endtabs %}

***

## 10. End Node

The End Node marks the definitive **termination point of the conversation** in a Sequential Agent workflow. It signifies that no further processing, actions, or interactions are required.

<figure><img src="/files/wiROoGUBMANuNlWYEgtJ" alt="" width="375"><figcaption></figcaption></figure>

### Understanding the End Node

The End Node serves as a signal within Flowise's Sequential Agent architecture, **indicating that the conversation has reached its intended conclusion**. Upon reaching the End Node, the system "understands" that the conversational objective has been met, and no further actions or interactions are required within the flow.

### Inputs

<table><thead><tr><th width="212"></th><th width="103">Required</th><th>Description</th></tr></thead><tbody><tr><td>Agent Node</td><td><strong>Yes</strong></td><td>Receives the final output from a preceding Agent Node, indicating the end of the agent's processing.</td></tr><tr><td>LLM Node</td><td><strong>Yes</strong></td><td>Receives the final output from a preceding LLM Node, indicating the end of the LLM Node's processing.</td></tr><tr><td>Tool Node</td><td><strong>Yes</strong></td><td>Receives the final output from a preceding Tool Node, indicating the completion of the Tool Node's execution.</td></tr><tr><td>Condition Node</td><td><strong>Yes</strong></td><td>Receives the final output from a preceding Condition Node, indicating the end of the Condition Node's execution.</td></tr><tr><td>Condition Agent Node</td><td><strong>Yes</strong></td><td>Receives the final output from a preceding Condition Agent Node, indicating the completion of the Condition Agent Node's processing.</td></tr></tbody></table>

{% hint style="info" %}
The **End Node requires at least one connection from the following nodes**: Agent Node, LLM Node, Tool Node, Condition Node, or Condition Agent Node.
{% endhint %}

### Outputs

The **End Node does not have any output** connections as it signifies the termination of the information flow.

### Best Practices

{% tabs %}
{% tab title="Pro Tips" %}
**Provide a final response**

If appropriate, precede the End Node with a dedicated LLM or Agent Node that generates a final message or summary for the user, providing closure to the conversation.
{% endtab %}

{% tab title="Potential Pitfalls" %}
**Premature conversation termination**

* **Problem:** The End Node is placed too early in the workflow, causing the conversation to end before all necessary steps are completed or the user's request is fully addressed.
* **Example:** A chatbot designed to collect user feedback ends the conversation after the user provides their first comment, without giving them an opportunity to provide additional feedback or ask questions.
* **Solution:** Review your workflow logic and ensure that the End Node is placed only after all essential steps have been completed or the user has explicitly indicated their intent to end the conversation.

**Lack of closure for the user**

* **Problem:** The conversation ends abruptly without a clear signal to the user or a final message that provides a sense of closure.
* **Example:** A customer support chatbot ends the conversation immediately after resolving an issue, without confirming the resolution with the user or offering further assistance.
* **Solution:** Precede the End Node with a dedicated LLM or Agent Node that generates a final response summarizing the conversation, confirming any actions taken, and providing a sense of closure for the user.
  {% endtab %}
  {% endtabs %}

***

## Condition Node vs. Condition Agent Node

The Condition and Condition Agent Nodes are essential in Flowise's Sequential Agent architecture for creating dynamic conversational experiences.

These nodes enable adaptive workflows, responding to user input, context, and complex decisions, but differ in their approach to condition evaluation and sophistication.

<details>

<summary><strong>Condition Node</strong></summary>

**Purpose**

To create branches based on simple, predefined logical conditions.

**Condition evaluation**

Uses a table-based interface or JavaScript code editor to define conditions that are checked against the custom State and/or the full conversation history.

**Output behavior**

* Supports multiple output paths, each associated with a specific condition.
* Conditions are evaluated in order. The first matching condition determines the output.
* If no conditions are met, the flow follows a default "End" output.

**Best suited for**

* Straightforward routing decisions based on easily definable conditions.
* Workflows where the logic can be expressed using simple comparisons, keyword checks, or custom state variable values.

</details>

<details>

<summary><strong>Condition Agent Node</strong></summary>

**Purpose**

To create dynamic routing based on an agent's analysis of the conversation and its structured output.

**Condition evaluation**

* If no Chat Model is connected, it uses the default system LLM (from the Start Node) to process the conversation history and any custom State.
* It can generate structured output, which is then used for condition evaluation.
* Uses a table-based interface or JavaScript code editor to define conditions that are checked against the agent's own output, structured or not.

**Output behavior**

Same as the Condition Node:

* Supports multiple output paths, each associated with a specific condition.
* Conditions are evaluated in order. The first matching condition determines the output.
* If no conditions are met, the flow follows the default "End" output.

**Best suited for**

* More complex routing decisions that require an understanding of conversation context, user intent, or nuanced factors.
* Scenarios where simple logical conditions are insufficient to capture the desired routing logic.
* **Example:** A chatbot needs to determine if a user's question is related to a specific product category. A Condition Agent Node could be used to analyze the user's query and output a JSON object with a "category" field. The Condition Agent Node can then use this structured output to route the user to the appropriate product specialist.

</details>

### Summarizing

<table><thead><tr><th width="218"></th><th width="258">Condition Node</th><th>Condition Agent Node</th></tr></thead><tbody><tr><td><strong>Decision Logic</strong></td><td>Based on predefined logical conditions.</td><td>Based on agent's reasoning and structured output.</td></tr><tr><td><strong>Agent Involvement</strong></td><td>No agent involved in condition evaluation.</td><td>Uses an agent to process context and generate output for conditions.</td></tr><tr><td><strong>Structured Output</strong></td><td>Not possible.</td><td>Possible and encouraged for reliable condition evaluation.</td></tr><tr><td><strong>Condition Evaluation</strong></td><td>Conditions are checked against the custom State and/or the full conversation history.</td><td>Can define conditions that are checked against the agent's own output, structured or not.</td></tr><tr><td><strong>Complexity</strong></td><td>Suitable for simple branching logic.</td><td>Handles more nuanced and context-aware routing.</td></tr><tr><td><strong>Ideal Use Cases</strong></td><td><ul><li>Routing based on user's age or a keyword in the conversation.</li></ul></td><td><ul><li>Routing based on user sentiment, intent, or complex contextual factors.</li></ul></td></tr></tbody></table>

### Choosing the right node

* **Condition Node:** Use the Condition Node when your routing logic involves straightforward decisions based on easily definable conditions. For instance, it's perfect for checking for specific keywords, comparing values in the State, or evaluating other simple logical expressions.
* **Condition Agent Node:** However, when your routing demands a deeper understanding of the conversation's nuances, the Condition Agent Node is the better choice. This node acts as your intelligent routing assistant, leveraging an LLM to analyze the conversation, make judgments based on context, and provide structured output that drives more sophisticated and dynamic routing.

***

## Agent Node vs. LLM Node

It's important to understand that both the **LLM Node and the Agent Node can be considered agentic entities within our system**, as they both leverage the capabilities of a large language model (LLM) or Chat Model.

However, while both nodes can process language and interact with tools, they are designed for different purposes within a workflow.

<details>

<summary>Agent Node</summary>

**Focus**

The primary focus of the Agent Node is to simulate the actions and decision-making of a human agent within a conversational context.

It acts as a high-level coordinator within the workflow, bringing together language understanding, tool execution, and decision-making to create a more human-like conversational experience.

**Strengths**

* Effectively manages the execution of multiple tools and integrates their results.
* Offers built-in support for Human-in-the-Loop (HITL), enabling human review and approval for sensitive operations.

**Best Suited For**

* Workflows where the agent needs to guide the user, gather information, make choices, and manage the overall conversation flow.
* Scenarios requiring integration with multiple external tools.
* Tasks involving sensitive data or actions where human oversight is beneficial, such as approving financial transactions.

</details>

<details>

<summary>LLM Node</summary>

**Focus**

Similar to the Agent Node, but it provides more flexibility when using tools and Human-in-the-Loop (HITL), both via the Tool Node.

**Strengths**

* Enables the definition of JSON schemas to structure the LLM's output, making it easier to extract specific information.
* Offers flexibility in tool integration, allowing for more complex sequences of LLM and tool calls, and providing fine-grained control over the HITL feature.

**Best Suited For**

* Scenarios where structured data needs to be extracted from the LLM's response.
* Workflows requiring a mix of automated and human-reviewed tool executions. For example, an LLM Node might call a tool to retrieve product information (automated), and then a different tool to process a payment, which would require HITL approval.

</details>

### Summarizing

<table><thead><tr><th width="206"></th><th width="253">Agent Node</th><th>LLM Node</th></tr></thead><tbody><tr><td><strong>Tool Interaction</strong></td><td>Directly calls and manages multiple tools, built-in HITL.</td><td>Triggers tools via the Tool Node, granular HITL control at the tool level.</td></tr><tr><td><strong>Human-in-the-Loop (HITL)</strong></td><td>HITL controlled at the Agent Node level (all connected tools affected).</td><td>HITL managed at the individual Tool Node level (more flexibility).</td></tr><tr><td><strong>Structured Output</strong></td><td>Relies on the LLM's natural output format.</td><td>Relies on the LLM's natural output format, but, if needed, provides JSON schema definition to structure LLM output.</td></tr><tr><td><strong>Ideal Use Cases</strong></td><td><ul><li>Workflows with complex tool orchestration.</li><li>Simplified HITL at the Agent Level.</li></ul></td><td><ul><li>Extracting structured data from LLM output</li><li>Workflows with complex LLM and tool interactions, requiring mixed HITL levels.</li></ul></td></tr></tbody></table>

### Choosing the right node

* **Choose the Agent Node:** Use the Agent Node when you need to create a conversational system that can manage the execution of multiple tools, all of which share the same HITL setting (enabled or disabled for the entire Agent Node). The Agent Node is also well-suited for handling complex multi-step conversations where consistent agent-like behavior is desired.
* **Choose the LLM Node:** On the other hand, use the LLM Node when you need to extract structured data from the LLM's output using the JSON schema feature, a capability not available in the Agent Node. The LLM Node also excels at orchestrating tool execution with fine-grained control over HITL at the individual tool level, allowing you to mix automated and human-reviewed tool executions by using multiple Tool Nodes connected to the LLM Node.
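To make the structured-output idea concrete, here is a sketch of the kind of JSON schema you might define on an LLM Node (the field names are hypothetical; adapt them to your flow):

```json
{
  "type": "object",
  "properties": {
    "customer_name": { "type": "string", "description": "Name of the customer" },
    "sentiment": { "type": "string", "enum": ["positive", "neutral", "negative"] },
    "order_id": { "type": "string", "description": "Order reference, if mentioned" }
  },
  "required": ["customer_name", "sentiment"]
}
```

With a schema like this, the LLM's response can be parsed as structured data instead of free text.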

[^1]: In our current context, a lower level of abstraction refers to a system that exposes a greater degree of implementation detail to the developer.


# Video Tutorials

Learn Sequential Agents from the Community

### Build a Multi-Stage RAG Agent

In this video, [Leon](https://youtube.com/@leonvanzyl) provides a step-by-step tutorial on creating an advanced RAG agent that incorporates routing, fallback and self-correction techniques.

{% embed url="https://youtu.be/OejuvdyN_U8" %}

### Master Sequential Agents: Build Complex AI Apps with Flowise

In this video, [Leon](https://youtube.com/@leonvanzyl) provides a **comprehensive introduction to the Sequential Agent** architecture and demonstrates how to manage custom state to build more dynamic applications.

{% embed url="https://www.youtube.com/watch?v=6LbvgTbS0BE" %}

### Sequential vs. Multi Agents: Which Flowise feature is right for you?

In this video, [Leon](https://youtube.com/@leonvanzyl) looks at two different solutions in Flowise for creating multi-agent projects. He compares the **differences between Sequential Agents and Multi Agents** by recreating the same projects using both techniques.

{% embed url="https://www.youtube.com/watch?v=3ZmBq8_4vCs" %}

### Build Production-Ready Apps in Minutes: Flowise's Sequential Agents and n8n

In this video, [Wntrmute AI](https://www.youtube.com/@WntrmuteAI) demonstrates how to quickly build a **production-ready application** in less than 30 minutes by combining **Flowise's Sequential Agents** and **n8n**.

{% embed url="https://www.youtube.com/watch?v=DA_0eOTYnmc" %}

### How to Build a Self-Improving AI with Agentic RAG and Flowise

In this video, [Leon](https://youtube.com/@leonvanzyl) shows how to build a self-correcting RAG application using FlowiseAI's Sequential Agents. Agentic RAG is a powerful approach for creating AI solutions that can learn and improve their responses over time.

{% embed url="https://www.youtube.com/watch?v=SL77Ojbgy6U" %}


# Prediction

The Prediction API is the primary endpoint for interacting with your Flowise flows and assistants. It lets you send messages to a selected flow and receive responses. This API handles the core chat functionality, including:

* **Chat Interactions**: Send questions or messages to your flow and receive AI-generated responses
* **Streaming Responses**: Get real-time streaming responses for a better user experience
* **Conversation Memory**: Maintain context across multiple messages within a session
* **File Processing**: Upload and process images, audio, and other files as part of your queries
* **Dynamic Configuration**: Override chatflow settings and pass variables at runtime

For details, see the [Prediction Endpoint API Reference](/api-reference/prediction).

## Base URL and Authentication

**Base URL**: `http://localhost:3000` (or your Flowise instance URL)

**Endpoint**: `POST /api/v1/prediction/:id`

**Authentication**: Refer to [Authentication for Flows](/configuration/authorization/chatflow-level)
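If API-key authentication is enabled for your flow, requests typically carry the key as a Bearer token in the `Authorization` header. A minimal sketch (the helper and all IDs/keys are placeholders; see the linked authentication page for details):

```python
# Sketch: build headers that attach a Flowise API key as a Bearer token
# (assumption: API-key auth is enabled on the flow).
def auth_headers(api_key: str) -> dict:
    return {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }

# Usage with the requests library (placeholder values):
# import requests
# requests.post(
#     "http://localhost:3000/api/v1/prediction/your-chatflow-id",
#     json={"question": "Hello"},
#     headers=auth_headers("your-api-key"),
# )
```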

## Request Format

#### Basic Request Structure

```json
{
    "question": "Your message here",
    "streaming": false,
    "overrideConfig": {},
    "history": [],
    "uploads": [],
    "form": {}
}
```

#### Parameters

| Parameter        | Type    | Required                    | Description                                 |
| ---------------- | ------- | --------------------------- | ------------------------------------------- |
| `question`       | string  | Either `question` or `form` | The message/question to send to the flow    |
| `form`           | object  | Either `question` or `form` | The form object to send to the flow         |
| `streaming`      | boolean | No                          | Enable streaming responses (default: false) |
| `overrideConfig` | object  | No                          | Override flow configuration                 |
| `history`        | array   | No                          | Previous conversation messages              |
| `uploads`        | array   | No                          | Files to upload (images, audio, etc.)       |
| `humanInput`     | object  | No                          | Provide human feedback and resume execution |

## SDK Libraries

Flowise provides official SDKs for Python and TypeScript/JavaScript:

#### Installation

**Python**: `pip install flowise`

**TypeScript/JavaScript**: `npm install flowise-sdk`

#### Python SDK Usage

{% tabs %}
{% tab title="Basic Usage" %}

```python
from flowise import Flowise, PredictionData

# Initialize client
client = Flowise(base_url="http://localhost:3000")

# Non-streaming prediction
try:
    response = client.create_prediction(
        PredictionData(
            chatflowId="your-chatflow-id",
            question="What is machine learning?",
            streaming=False
        )
    )
    
    # Handle response
    for result in response:
        print("Response:", result)
        
except Exception as e:
    print(f"Error: {e}")
```

{% endtab %}

{% tab title="Streaming" %}

```python
from flowise import Flowise, PredictionData

client = Flowise(base_url="http://localhost:3000")

# Streaming prediction
try:
    response = client.create_prediction(
        PredictionData(
            chatflowId="your-chatflow-id",
            question="Tell me a long story about AI",
            streaming=True
        )
    )
    
    # Process streaming chunks
    print("Streaming response:")
    for chunk in response:
        print(chunk, end="", flush=True)
        
except Exception as e:
    print(f"Error: {e}")
```

{% endtab %}

{% tab title="With Configuration" %}

```python
from flowise import Flowise, PredictionData

client = Flowise(base_url="http://localhost:3000")

# Advanced configuration
try:
    response = client.create_prediction(
        PredictionData(
            chatflowId="your-chatflow-id",
            question="Analyze this data",
            streaming=False,
            overrideConfig={
                "sessionId": "user-session-123",
                "temperature": 0.7,
                "maxTokens": 500,
                "returnSourceDocuments": True
            }
        )
    )
    
    for result in response:
        print("Response:", result)
        
except Exception as e:
    print(f"Error: {e}")
```

{% endtab %}
{% endtabs %}

#### TypeScript/JavaScript SDK Usage

{% tabs %}
{% tab title="Basic Usage" %}

```typescript
import { FlowiseClient } from 'flowise-sdk';

// Initialize client
const client = new FlowiseClient({ 
    baseUrl: 'http://localhost:3000' 
});

// Non-streaming prediction
async function chatWithFlow() {
    try {
        const response = await client.createPrediction({
            chatflowId: 'your-chatflow-id',
            question: 'What is machine learning?',
            streaming: false
        });
        
        console.log('Response:', response);
        
    } catch (error) {
        console.error('Error:', error);
    }
}

chatWithFlow();
```

{% endtab %}

{% tab title="Streaming" %}

```typescript
import { FlowiseClient } from 'flowise-sdk';

const client = new FlowiseClient({ 
    baseUrl: 'http://localhost:3000' 
});

// Streaming prediction
async function streamingChat() {
    try {
        const stream = await client.createPrediction({
            chatflowId: 'your-chatflow-id',
            question: 'Tell me a long story about AI',
            streaming: true
        });
        
        console.log('Streaming response:');
        for await (const chunk of stream) {
            process.stdout.write(chunk);
        }
        
    } catch (error) {
        console.error('Error:', error);
    }
}

streamingChat();
```

{% endtab %}

{% tab title="With Configuration" %}

```typescript
import { FlowiseClient } from 'flowise-sdk';

const client = new FlowiseClient({ 
    baseUrl: 'http://localhost:3000' 
});

// Advanced configuration
async function advancedChat() {
    try {
        const response = await client.createPrediction({
            chatflowId: 'your-chatflow-id',
            question: 'Analyze this data',
            streaming: false,
            overrideConfig: {
                sessionId: 'user-session-123',
                temperature: 0.7,
                maxTokens: 500,
                returnSourceDocuments: true
            }
        });
        
        console.log('Response:', response);
        
    } catch (error) {
        console.error('Error:', error);
    }
}

advancedChat();
```

{% endtab %}
{% endtabs %}

## Direct HTTP API Usage

If you prefer to use the REST API directly without SDKs:

#### Basic Request

{% tabs %}
{% tab title="Python (requests)" %}

```python
import requests
import json

def send_message(chatflow_id, question, streaming=False):
    url = f"http://localhost:3000/api/v1/prediction/{chatflow_id}"
    
    payload = {
        "question": question,
        "streaming": streaming
    }
    
    headers = {
        "Content-Type": "application/json"
    }
    
    try:
        response = requests.post(url, json=payload, headers=headers)
        response.raise_for_status()  # Raise exception for bad status codes
        
        return response.json()
        
    except requests.exceptions.RequestException as e:
        print(f"Request failed: {e}")
        return None

# Usage
result = send_message(
    chatflow_id="your-chatflow-id",
    question="What is artificial intelligence?",
    streaming=False
)

if result:
    print("Response:", result)
```

{% endtab %}

{% tab title="JavaScript (fetch)" %}

```javascript
async function sendMessage(chatflowId, question, streaming = false) {
    const url = `http://localhost:3000/api/v1/prediction/${chatflowId}`;
    
    const payload = {
        question: question,
        streaming: streaming
    };
    
    try {
        const response = await fetch(url, {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
            },
            body: JSON.stringify(payload)
        });
        
        if (!response.ok) {
            throw new Error(`HTTP error! status: ${response.status}`);
        }
        
        const result = await response.json();
        return result;
        
    } catch (error) {
        console.error('Request failed:', error);
        return null;
    }
}

// Usage
sendMessage(
    'your-chatflow-id',
    'What is artificial intelligence?',
    false
).then(result => {
    if (result) {
        console.log('Response:', result);
    }
});
```

{% endtab %}

{% tab title="cURL" %}

```bash
curl -X POST "http://localhost:3000/api/v1/prediction/your-chatflow-id" \
  -H "Content-Type: application/json" \
  -d '{
    "question": "What is artificial intelligence?",
    "streaming": false
  }'
```

{% endtab %}
{% endtabs %}

## Advanced Features

### Form Input

In Agentflow V2, you can select `form` as the input type.

<figure><img src="/files/yrCFMuMi2e4uUDLmw8P1" alt="" width="418"><figcaption></figcaption></figure>

You can override the values using the variable names of the form inputs:

```json
{
    "form": {
        "title": "Example",
        "count": 1,
        ...
    }
}
```

{% tabs %}
{% tab title="Python" %}

```python
import requests

def prediction(flow_id, form):
    url = f"http://localhost:3000/api/v1/prediction/{flow_id}"
    
    payload = {
        "form": form
    }
    
    try:
        response = requests.post(url, json=payload)
        response.raise_for_status()
        return response.json()
    except requests.exceptions.RequestException as e:
        print(f"Error: {e}")
        return None

result = prediction(
    flow_id="your-flow-id",
    form={
        "title": "ABC",
        "choices": "A"
    }
)

print(result)
```

{% endtab %}

{% tab title="JavaScript" %}

```javascript
async function prediction(flowId, form) {
    const url = `http://localhost:3000/api/v1/prediction/${flowId}`;
    
    const payload = {
        form: form
    };
    
    try {
        const response = await fetch(url, {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
            },
            body: JSON.stringify(payload)
        });
        
        if (!response.ok) {
            throw new Error(`HTTP error! status: ${response.status}`);
        }
        
        return await response.json();
        
    } catch (error) {
        console.error('Error:', error);
        return null;
    }
}

prediction(
    'your-flow-id',
    {
        "title": "ABC",
        "choices": "A"
    }
).then(result => {
    console.log(result);
});
```

{% endtab %}
{% endtabs %}

### Configuration Override

Override chatflow settings dynamically.

Override config is **disabled** by default for security reasons. Enable it from the top right: **Settings** → **Configuration** → **Security** tab:

<div align="right" data-full-width="false"><figure><img src="/files/Dxeb64eoQuftxvBjRb6g" alt=""><figcaption></figcaption></figure></div>

{% tabs %}
{% tab title="Python" %}

```python
import requests

def query_with_config(flow_id, question, config):
    url = f"http://localhost:3000/api/v1/prediction/{flow_id}"
    
    payload = {
        "question": question,
        "overrideConfig": config
    }
    
    try:
        response = requests.post(url, json=payload)
        response.raise_for_status()
        return response.json()
    except requests.exceptions.RequestException as e:
        print(f"Error: {e}")
        return None

# Example: Override session and return source documents
result = query_with_config(
    flow_id="your-flow-id",
    question="How does machine learning work?",
    config={
        "sessionId": "user-123",
        "temperature": 0.5,
        "maxTokens": 1000
    }
)

print(result)
```

{% endtab %}

{% tab title="JavaScript" %}

```javascript
async function queryWithConfig(flowId, question, config) {
    const url = `http://localhost:3000/api/v1/prediction/${flowId}`;
    
    const payload = {
        question: question,
        overrideConfig: config
    };
    
    try {
        const response = await fetch(url, {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
            },
            body: JSON.stringify(payload)
        });
        
        if (!response.ok) {
            throw new Error(`HTTP error! status: ${response.status}`);
        }
        
        return await response.json();
        
    } catch (error) {
        console.error('Error:', error);
        return null;
    }
}

// Example: Override session and return source documents
queryWithConfig(
    'your-flow-id',
    'How does machine learning work?',
    {
        sessionId: 'user-123',
        temperature: 0.5,
        maxTokens: 1000
    }
).then(result => {
    console.log(result);
});
```

{% endtab %}
{% endtabs %}

For `array` types, hovering over the info icon shows the schema that can be overridden.

Array values from `overrideConfig` are concatenated with the existing array values. For example, if the existing `startState` has:

```json
{
  "key": "key1",
  "value": "value1"
}
```

And if we enable override:

<figure><img src="/files/fkB1cd6bT9se1PWkpiyP" alt=""><figcaption></figcaption></figure>

```json
"overrideConfig": {
    "startState": [
        {
            "key": "foo",
            "value": "bar"
        }
    ],
    "llmMessages": [
        {
            "role": "system",
            "content": "You are helpful assistant"
        }
    ]
}
```

The final `startState` will be:

```json
[
  {
    "key": "key1",
    "value": "value1"
  },
  {
    "key": "foo",
    "value": "bar"
  }
]
```
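The merge behavior above can be sketched in a few lines (a simplified model of the concatenation, not Flowise's actual implementation):

```python
# Simplified model of how array overrides are merged
# (assumption: plain concatenation, as in the startState example above).
def merge_array_override(existing: list, override: list) -> list:
    return existing + override

existing_start_state = [{"key": "key1", "value": "value1"}]
override_start_state = [{"key": "foo", "value": "bar"}]

merged = merge_array_override(existing_start_state, override_start_state)
# merged contains both entries, with the existing values first
```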

### Overriding Specific Node

By default, if multiple nodes share the same type and no node ID is specified, overriding a property will update that property across all matching nodes.

For example, suppose there are two LLM nodes where I want to override the system message:

<figure><img src="/files/Pjkws4BmiygRVyMfIlVe" alt=""><figcaption></figcaption></figure>

After enabling the ability to override:

<figure><img src="/files/bO79baU0Nx6tJYEiEhDS" alt=""><figcaption></figcaption></figure>

I can override the system message for both LLMs like so:

```json
"overrideConfig": {
    "llmMessages": [
        {
            "role": "system",
            "content": "You are sarcastic"
        }
    ]
}
```

From the Execution, you can see the overridden system message:

<figure><img src="/files/hVDEP9PyqaOjpxQmlH76" alt=""><figcaption></figcaption></figure>

<figure><img src="/files/ugjZzjL9cpfXFyXvr3go" alt=""><figcaption></figcaption></figure>

In some cases, you might want to override the config for a specific node only. You can do so by specifying the node ID **inside** the property you want to override.

For example:

```json
"overrideConfig": {
    "llmMessages": {
        "llmAgentflow_0": [
            {
                "role": "system",
                "content": "You are sweet"
            } 
        ],
        "llmAgentflow_1": [
            {
                "role": "system",
                "content": "You are smart"
            } 
        ]
    }
}
```

If you head back to Execution, you can see each LLM has the correct overridden value:

<figure><img src="/files/vzvPTU3pHwFVv4k0qDjc" alt=""><figcaption></figcaption></figure>

<figure><img src="/files/VMoA5aUEdO0EeFysofv9" alt=""><figcaption></figcaption></figure>

### Conversation History

Provide conversation context by including previous messages in the history array.

**History Message Format**

```json
{
    "role": "apiMessage" | "userMessage",
    "content": "Message content"
}
```

{% tabs %}
{% tab title="Python" %}

```python
import requests

def chat_with_history(flow_id, question, history):
    url = f"http://localhost:3000/api/v1/prediction/{flow_id}"
    
    payload = {
        "question": question,
        "history": history
    }
    
    try:
        response = requests.post(url, json=payload)
        response.raise_for_status()
        return response.json()
    except requests.exceptions.RequestException as e:
        print(f"Error: {e}")
        return None

# Example conversation with context
conversation_history = [
    {
        "role": "apiMessage",
        "content": "Hello! I'm an AI assistant. How can I help you today?"
    },
    {
        "role": "userMessage", 
        "content": "Hi, my name is Sarah and I'm learning about AI"
    },
    {
        "role": "apiMessage",
        "content": "Nice to meet you, Sarah! I'd be happy to help you learn about AI. What specific aspects interest you?"
    }
]

result = chat_with_history(
    flow_id="your-flow-id",
    question="Can you explain neural networks in simple terms?",
    history=conversation_history
)

print(result)
```

{% endtab %}

{% tab title="JavaScript" %}

```javascript
async function chatWithHistory(flowId, question, history) {
    const url = `http://localhost:3000/api/v1/prediction/${flowId}`;
    
    const payload = {
        question: question,
        history: history
    };
    
    try {
        const response = await fetch(url, {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
            },
            body: JSON.stringify(payload)
        });
        
        if (!response.ok) {
            throw new Error(`HTTP error! status: ${response.status}`);
        }
        
        return await response.json();
        
    } catch (error) {
        console.error('Error:', error);
        return null;
    }
}

// Example conversation with context
const conversationHistory = [
    {
        role: "apiMessage",
        content: "Hello! I'm an AI assistant. How can I help you today?"
    },
    {
        role: "userMessage", 
        content: "Hi, my name is Sarah and I'm learning about AI"
    },
    {
        role: "apiMessage",
        content: "Nice to meet you, Sarah! I'd be happy to help you learn about AI. What specific aspects interest you?"
    }
];

chatWithHistory(
    'your-flow-id',
    'Can you explain neural networks in simple terms?',
    conversationHistory
).then(result => {
    console.log(result);
});
```

{% endtab %}
{% endtabs %}

### Session Management

Use `sessionId` to maintain conversation state across multiple API calls. Each session maintains its own conversation context and memory.

{% tabs %}
{% tab title="Python" %}

```python
import requests

class FlowiseSession:
    def __init__(self, flow_id, session_id, base_url="http://localhost:3000"):
        self.flow_id = flow_id
        self.session_id = session_id
        self.base_url = base_url
        self.url = f"{base_url}/api/v1/prediction/{flow_id}"
    
    def send_message(self, question, **kwargs):
        payload = {
            "question": question,
            "overrideConfig": {
                "sessionId": self.session_id,
                **kwargs.get("overrideConfig", {})
            }
        }
        
        # Add any additional parameters
        for key, value in kwargs.items():
            if key != "overrideConfig":
                payload[key] = value
        
        try:
            response = requests.post(self.url, json=payload)
            response.raise_for_status()
            return response.json()
        except requests.exceptions.RequestException as e:
            print(f"Error: {e}")
            return None

# Usage
session = FlowiseSession(
    flow_id="your-flow-id",
    session_id="user-session-123"
)

# First message
response1 = session.send_message("Hello, my name is John")
print("Response 1:", response1)

# Second message - will remember the previous context
response2 = session.send_message("What's my name?")
print("Response 2:", response2)
```

{% endtab %}

{% tab title="JavaScript" %}

```javascript
class FlowiseSession {
    constructor(flowId, sessionId, baseUrl = 'http://localhost:3000') {
        this.flowId = flowId;
        this.sessionId = sessionId;
        this.baseUrl = baseUrl;
        this.url = `${baseUrl}/api/v1/prediction/${flowId}`;
    }
    
    async sendMessage(question) {
        const payload = {
            question: question,
            overrideConfig: {
                sessionId: this.sessionId
            }
        };
  
        try {
            const response = await fetch(this.url, {
                method: 'POST',
                headers: {
                    'Content-Type': 'application/json',
                },
                body: JSON.stringify(payload)
            });
            
            if (!response.ok) {
                throw new Error(`HTTP error! status: ${response.status}`);
            }
            
            return await response.json();
            
        } catch (error) {
            console.error('Error:', error);
            return null;
        }
    }
}

// Usage
const session = new FlowiseSession(
    'your-flow-id',
    'user-session-123'
);

async function conversationExample() {
    // First message
    const response1 = await session.sendMessage("Hello, my name is John");
    console.log("Response 1:", response1);
    
    // Second message - will remember the previous context
    const response2 = await session.sendMessage("What's my name?");
    console.log("Response 2:", response2);
}

conversationExample();
```

{% endtab %}
{% endtabs %}

### Variables

Pass dynamic variables to your flow using the `vars` property in `overrideConfig`. Variables can be used in your flow to inject dynamic content.

{% hint style="warning" %}
Variables must be created before they can be overridden. Refer to [Variables](/using-flowise/variables)
{% endhint %}

{% tabs %}
{% tab title="Python" %}

```python
import requests

def send_with_variables(flow_id, question, variables):
    url = f"http://localhost:3000/api/v1/prediction/{flow_id}"
    
    payload = {
        "question": question,
        "overrideConfig": {
            "vars": variables
        }
    }
    
    try:
        response = requests.post(url, json=payload)
        response.raise_for_status()
        return response.json()
    except requests.exceptions.RequestException as e:
        print(f"Error: {e}")
        return None

# Example: Pass user information and preferences
result = send_with_variables(
    flow_id="your-flow-id",
    question="Create a personalized workout plan",
    variables={
        "user_name": "Alice",
        "fitness_level": "intermediate",
        "preferred_duration": "30 minutes",
        "equipment": "dumbbells, resistance bands",
        "goals": "strength training, flexibility"
    }
)

print(result)
```

{% endtab %}

{% tab title="JavaScript" %}

```javascript
async function sendWithVariables(flowId, question, variables) {
    const url = `http://localhost:3000/api/v1/prediction/${flowId}`;
    
    const payload = {
        question: question,
        overrideConfig: {
            vars: variables
        }
    };
    
    try {
        const response = await fetch(url, {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
            },
            body: JSON.stringify(payload)
        });
        
        if (!response.ok) {
            throw new Error(`HTTP error! status: ${response.status}`);
        }
        
        return await response.json();
        
    } catch (error) {
        console.error('Error:', error);
        return null;
    }
}

// Example: Pass user information and preferences
sendWithVariables(
    'your-flow-id',
    'Create a personalized workout plan',
    {
        user_name: 'Alice',
        fitness_level: 'intermediate',
        preferred_duration: '30 minutes',
        equipment: 'dumbbells, resistance bands',
        goals: 'strength training, flexibility'
    }
).then(result => {
    console.log(result);
});
```

{% endtab %}
{% endtabs %}

### Image Uploads

Upload images for visual analysis when your flow supports image processing. Refer to [Image](/using-flowise/uploads#image) for more reference.

**Upload Structure:**

```json
{
    "data": "",
    "type": "",
    "name": "",
    "mime": ""
}
```

**Data:** Base64 or URL of an image

**Type**: `url` or `file`

**Name:** name of the image

**Mime**: `image/png`, `image/jpeg`, `image/jpg`

{% tabs %}
{% tab title="Python (Base64)" %}

```python
import requests
import base64
import os

def upload_image(flow_id, question, image_path):
    # Read and encode image
    with open(image_path, 'rb') as image_file:
        encoded_image = base64.b64encode(image_file.read()).decode('utf-8')
    
    # Determine MIME type based on file extension
    mime_types = {
        '.png': 'image/png',
        '.jpg': 'image/jpeg',
        '.jpeg': 'image/jpeg',
        '.gif': 'image/gif',
        '.webp': 'image/webp'
    }

    file_ext = os.path.splitext(image_path)[1].lower()
    mime_type = mime_types.get(file_ext, 'image/png')
    
    url = f"http://localhost:3000/api/v1/prediction/{flow_id}"
    
    payload = {
        "question": question,
        "uploads": [
            {
                "data": f"data:{mime_type};base64,{encoded_image}",
                "type": "file",
                "name": os.path.basename(image_path),
                "mime": mime_type
            }
        ]
    }
    
    try:
        response = requests.post(url, json=payload)
        response.raise_for_status()
        return response.json()
    except requests.exceptions.RequestException as e:
        print(f"Error: {e}")
        return None

# Example usage
result = upload_image(
    flow_id="your-flow-id",
    question="Can you describe what you see in this image?",
    image_path="path/to/your/image.png"
)

print(result)
```

{% endtab %}

{% tab title="Python (URL)" %}

```python
import requests
import os

def upload_image_url(flow_id, question, image_url, image_name=None):
    """
    Upload an image using a URL instead of base64 encoding.
    This is more efficient for images that are already hosted online.
    """
    url = f"http://localhost:3000/api/v1/prediction/{flow_id}"
    
    # Extract filename from URL if not provided
    if not image_name:
        image_name = image_url.split('/')[-1]
        if '?' in image_name:
            image_name = image_name.split('?')[0]
    
    # Determine MIME type from URL extension
    mime_types = {
        '.png': 'image/png',
        '.jpg': 'image/jpeg',
        '.jpeg': 'image/jpeg',
        '.gif': 'image/gif',
        '.webp': 'image/webp'
    }
    
    file_ext = os.path.splitext(image_name)[1].lower()
    mime_type = mime_types.get(file_ext, 'image/jpeg')
    
    payload = {
        "question": question,
        "uploads": [
            {
                "data": image_url,
                "type": "url",
                "name": image_name,
                "mime": mime_type
            }
        ]
    }
    
    try:
        response = requests.post(url, json=payload)
        response.raise_for_status()
        return response.json()
    except requests.exceptions.RequestException as e:
        print(f"Error: {e}")
        return None

# Example usage with public image URL
result = upload_image_url(
    flow_id="your-flow-id",
    question="What's in this image? Analyze it in detail.",
    image_url="https://example.com/path/to/image.jpg",
    image_name="example_image.jpg"
)

print(result)

# Example with direct URL (no custom name)
result2 = upload_image_url(
    flow_id="your-flow-id",
    question="Describe this screenshot",
    image_url="https://i.imgur.com/sample.png"
)

print(result2)
```

{% endtab %}

{% tab title="JavaScript (File Upload)" %}

```javascript
async function uploadImage(flowId, question, imageFile) {
    return new Promise((resolve, reject) => {
        const reader = new FileReader();
        
        reader.onload = async function(e) {
            const base64Data = e.target.result;
            
            const payload = {
                question: question,
                uploads: [
                    {
                        data: base64Data,
                        type: 'file',
                        name: imageFile.name,
                        mime: imageFile.type
                    }
                ]
            };
            
            try {
                const response = await fetch(`http://localhost:3000/api/v1/prediction/${flowId}`, {
                    method: 'POST',
                    headers: {
                        'Content-Type': 'application/json',
                    },
                    body: JSON.stringify(payload)
                });
                
                if (!response.ok) {
                    throw new Error(`HTTP error! status: ${response.status}`);
                }
                
                const result = await response.json();
                resolve(result);
                
            } catch (error) {
                reject(error);
            }
        };
        
        reader.onerror = function() {
            reject(new Error('Failed to read file'));
        };
        
        reader.readAsDataURL(imageFile);
    });
}

// Example usage in browser
document.getElementById('imageInput').addEventListener('change', async function(e) {
    const file = e.target.files[0];
    if (file) {
        try {
            const result = await uploadImage(
                'your-flow-id',
                'Can you describe what you see in this image?',
                file
            );
            console.log('Analysis result:', result);
        } catch (error) {
            console.error('Upload failed:', error);
        }
    }
});
```

{% endtab %}

{% tab title="JavaScript (URL)" %}

```javascript
async function uploadImageUrl(flowId, question, imageUrl, imageName = null) {
    /**
     * Upload an image using a URL instead of base64 encoding.
     * This is more efficient for images that are already hosted online.
     */
    
    // Extract filename from URL if not provided
    if (!imageName) {
        imageName = imageUrl.split('/').pop();
        if (imageName.includes('?')) {
            imageName = imageName.split('?')[0];
        }
    }
    
    // Determine MIME type from URL extension
    const mimeTypes = {
        '.png': 'image/png',
        '.jpg': 'image/jpeg',
        '.jpeg': 'image/jpeg',
        '.gif': 'image/gif',
        '.webp': 'image/webp'
    };
    
    const fileExt = imageName.toLowerCase().substring(imageName.lastIndexOf('.'));
    const mimeType = mimeTypes[fileExt] || 'image/jpeg';
    
    const payload = {
        question: question,
        uploads: [
            {
                data: imageUrl,
                type: 'url',
                name: imageName,
                mime: mimeType
            }
        ]
    };
    
    try {
        const response = await fetch(`http://localhost:3000/api/v1/prediction/${flowId}`, {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
            },
            body: JSON.stringify(payload)
        });
        
        if (!response.ok) {
            throw new Error(`HTTP error! status: ${response.status}`);
        }
        
        return await response.json();
        
    } catch (error) {
        console.error('Error:', error);
        return null;
    }
}

// Example usage with public image URL
async function analyzeImageFromUrl() {
    try {
        const result = await uploadImageUrl(
            'your-flow-id',
            'What is in this image? Analyze it in detail.',
            'https://example.com/path/to/image.jpg',
            'example_image.jpg'
        );
        
        console.log('Analysis result:', result);
    } catch (error) {
        console.error('Upload failed:', error);
    }
}

// Example with direct URL (no custom name)
uploadImageUrl(
    'your-flow-id',
    'Describe this screenshot',
    'https://i.imgur.com/sample.png'
).then(result => {
    if (result) {
        console.log('Analysis result:', result);
    }
});

// Example with multiple image URLs
async function analyzeMultipleImages() {
    const imageUrls = [
        'https://example.com/image1.jpg',
        'https://example.com/image2.png',
        'https://example.com/image3.gif'
    ];
    
    const results = await Promise.all(
        imageUrls.map(url => 
            uploadImageUrl(
                'your-flow-id',
                `Analyze this image: ${url}`,
                url
            )
        )
    );
    
    results.forEach((result, index) => {
        console.log(`Image ${index + 1} analysis:`, result);
    });
}
```

{% endtab %}

{% tab title="JavaScript (Node.js)" %}

```javascript
const fs = require('fs');
const path = require('path');

async function uploadImage(flowId, question, imagePath) {
    // Read image file
    const imageBuffer = fs.readFileSync(imagePath);
    const base64Image = imageBuffer.toString('base64');
    
    // Determine MIME type
    const ext = path.extname(imagePath).toLowerCase();
    const mimeTypes = {
        '.png': 'image/png',
        '.jpg': 'image/jpeg',
        '.jpeg': 'image/jpeg',
        '.gif': 'image/gif',
        '.webp': 'image/webp'
    };
    const mimeType = mimeTypes[ext] || 'image/png';
    
    const payload = {
        question: question,
        uploads: [
            {
                data: `data:${mimeType};base64,${base64Image}`,
                type: 'file',
                name: path.basename(imagePath),
                mime: mimeType
            }
        ]
    };
    
    try {
        const response = await fetch(`http://localhost:3000/api/v1/prediction/${flowId}`, {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
            },
            body: JSON.stringify(payload)
        });
        
        if (!response.ok) {
            throw new Error(`HTTP error! status: ${response.status}`);
        }
        
        return await response.json();
        
    } catch (error) {
        console.error('Error:', error);
        return null;
    }
}

// Example usage
uploadImage(
    'your-flow-id',
    'Can you describe what you see in this image?',
    'path/to/your/image.png'
).then(result => {
    console.log('Analysis result:', result);
});
```

{% endtab %}
{% endtabs %}

### Audio Uploads (Speech to Text)

Upload audio files for speech-to-text processing. Refer to [Audio](/using-flowise/uploads#audio) for more details.

**Upload Structure:**

```json
{
    "data": "", 
    "type": "",
    "name": ",
    "mime": "
}
```

**Data:** Base64 string or URL of the audio file

**Type:** `audio` or `url`

**Name:** name of the audio file

**Mime:** `audio/mp4`, `audio/webm`, `audio/wav`, `audio/mpeg`

{% tabs %}
{% tab title="Python (Base64)" %}

```python
import requests
import base64
import os

def upload_audio(flow_id, audio_path, question=None):
    # Read and encode audio
    with open(audio_path, 'rb') as audio_file:
        encoded_audio = base64.b64encode(audio_file.read()).decode('utf-8')
    
    # Determine MIME type based on file extension
    mime_types = {
        '.webm': 'audio/webm',
        '.wav': 'audio/wav',
        '.mp3': 'audio/mpeg',
        '.m4a': 'audio/mp4'
    }
 
    file_ext = os.path.splitext(audio_path)[1].lower()
    mime_type = mime_types.get(file_ext, 'audio/webm')
    
    url = f"http://localhost:3000/api/v1/prediction/{flow_id}"
    
    payload = {
        "uploads": [
            {
                "data": f"data:{mime_type};base64,{encoded_audio}",
                "type": "audio",
                "name": os.path.basename(audio_path),
                "mime": mime_type
            }
        ]
    }
    
    # Add question if provided
    if question:
        payload["question"] = question
    
    try:
        response = requests.post(url, json=payload)
        response.raise_for_status()
        return response.json()
    except requests.exceptions.RequestException as e:
        print(f"Error: {e}")
        return None

# Example usage
result = upload_audio(
    flow_id="your-flow-id",
    audio_path="path/to/your/audio.wav",
    question="Please transcribe this audio and summarize the content"
)

print(result)
```

{% endtab %}

{% tab title="Python (URL)" %}

```python
import requests
import os

def upload_audio_url(flow_id, audio_url, question=None, audio_name=None):
    """
    Upload an audio file using a URL instead of base64 encoding.
    This is more efficient for audio files that are already hosted online.
    """
    url = f"http://localhost:3000/api/v1/prediction/{flow_id}"
    
    # Extract filename from URL if not provided
    if not audio_name:
        audio_name = audio_url.split('/')[-1]
        if '?' in audio_name:
            audio_name = audio_name.split('?')[0]
    
    # Determine MIME type from URL extension
    mime_types = {
        '.webm': 'audio/webm',
        '.wav': 'audio/wav',
        '.mp3': 'audio/mpeg',
        '.m4a': 'audio/mp4',
        '.ogg': 'audio/ogg',
        '.aac': 'audio/aac'
    }

    file_ext = os.path.splitext(audio_name)[1].lower()
    mime_type = mime_types.get(file_ext, 'audio/wav')
    
    payload = {
        "uploads": [
            {
                "data": audio_url,
                "type": "url",
                "name": audio_name,
                "mime": mime_type
            }
        ]
    }
    
    # Add question if provided
    if question:
        payload["question"] = question
    
    try:
        response = requests.post(url, json=payload)
        response.raise_for_status()
        return response.json()
    except requests.exceptions.RequestException as e:
        print(f"Error: {e}")
        return None

# Example usage with public audio URL
result = upload_audio_url(
    flow_id="your-flow-id",
    audio_url="https://example.com/path/to/speech.mp3",
    question="Please transcribe this audio and provide a summary",
    audio_name="speech_recording.mp3"
)

print(result)

# Example with direct URL (no custom name or question)
result2 = upload_audio_url(
    flow_id="your-flow-id",
    audio_url="https://storage.googleapis.com/sample-audio/speech.wav"
)

print(result2)

# Example for meeting transcription
result3 = upload_audio_url(
    flow_id="your-flow-id",
    audio_url="https://meetings.example.com/recording-123.m4a",
    question="Transcribe this meeting recording and extract key action items and decisions made",
    audio_name="team_meeting_jan15.m4a"
)

print(result3)
```

{% endtab %}

{% tab title="JavaScript (File Upload)" %}

```javascript
async function uploadAudio(flowId, audioFile, question = null) {
    return new Promise((resolve, reject) => {
        const reader = new FileReader();
        
        reader.onload = async function(e) {
            const base64Data = e.target.result;
            
            const payload = {
                uploads: [
                    {
                        data: base64Data,
                        type: 'audio',
                        name: audioFile.name,
                        mime: audioFile.type
                    }
                ]
            };
            
            // Add question if provided
            if (question) {
                payload.question = question;
            }
            
            try {
                const response = await fetch(`http://localhost:3000/api/v1/prediction/${flowId}`, {
                    method: 'POST',
                    headers: {
                        'Content-Type': 'application/json',
                    },
                    body: JSON.stringify(payload)
                });
                
                if (!response.ok) {
                    throw new Error(`HTTP error! status: ${response.status}`);
                }
                
                const result = await response.json();
                resolve(result);
                
            } catch (error) {
                reject(error);
            }
        };
        
        reader.onerror = function() {
            reject(new Error('Failed to read file'));
        };
        
        reader.readAsDataURL(audioFile);
    });
}

// Example usage with file input
document.getElementById('audioInput').addEventListener('change', async function(e) {
    const file = e.target.files[0];
    if (file) {
        try {
            const result = await uploadAudio(
                'your-flow-id',
                file,
                'Please transcribe this audio and summarize the content'
            );
            console.log('Transcription result:', result);
        } catch (error) {
            console.error('Upload failed:', error);
        }
    }
});
```

{% endtab %}

{% tab title="JavaScript (URL)" %}

```javascript
async function uploadAudioUrl(flowId, audioUrl, question = null, audioName = null) {
    /**
     * Upload an audio file using a URL instead of base64 encoding.
     * This is more efficient for audio files that are already hosted online.
     */
    
    // Extract filename from URL if not provided
    if (!audioName) {
        audioName = audioUrl.split('/').pop();
        if (audioName.includes('?')) {
            audioName = audioName.split('?')[0];
        }
    }
    
    // Determine MIME type from URL extension
    const mimeTypes = {
        '.webm': 'audio/webm',
        '.wav': 'audio/wav',
        '.mp3': 'audio/mpeg',
        '.m4a': 'audio/mp4',
        '.ogg': 'audio/ogg',
        '.aac': 'audio/aac'
    };
    
    const fileExt = audioName.toLowerCase().substring(audioName.lastIndexOf('.'));
    const mimeType = mimeTypes[fileExt] || 'audio/wav';
    
    const payload = {
        uploads: [
            {
                data: audioUrl,
                type: 'url',
                name: audioName,
                mime: mimeType
            }
        ]
    };
    
    // Add question if provided
    if (question) {
        payload.question = question;
    }
    
    try {
        const response = await fetch(`http://localhost:3000/api/v1/prediction/${flowId}`, {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
            },
            body: JSON.stringify(payload)
        });
        
        if (!response.ok) {
            throw new Error(`HTTP error! status: ${response.status}`);
        }
        
        return await response.json();
        
    } catch (error) {
        console.error('Error:', error);
        return null;
    }
}

// Example usage with public audio URL
async function transcribeAudioFromUrl() {
    try {
        const result = await uploadAudioUrl(
            'your-flow-id',
            'https://example.com/path/to/speech.mp3',
            'Please transcribe this audio and provide a summary',
            'speech_recording.mp3'
        );
        
        console.log('Transcription result:', result);
    } catch (error) {
        console.error('Upload failed:', error);
    }
}

// Example with direct URL (no custom name or question)
uploadAudioUrl(
    'your-flow-id',
    'https://storage.googleapis.com/sample-audio/speech.wav'
).then(result => {
    if (result) {
        console.log('Transcription result:', result);
    }
});

// Example for meeting transcription
uploadAudioUrl(
    'your-flow-id',
    'https://meetings.example.com/recording-123.m4a',
    'Transcribe this meeting recording and extract key action items and decisions made',
    'team_meeting_jan15.m4a'
).then(result => {
    if (result) {
        console.log('Meeting analysis:', result);
    }
});

// Example with multiple audio URLs for batch processing
async function transcribeMultipleAudios() {
    const audioUrls = [
        {
            url: 'https://example.com/interview1.wav',
            question: 'Transcribe this interview and summarize key points',
            name: 'interview_candidate_1.wav'
        },
        {
            url: 'https://example.com/interview2.mp3',
            question: 'Transcribe this interview and summarize key points',
            name: 'interview_candidate_2.mp3'
        },
        {
            url: 'https://example.com/lecture.m4a',
            question: 'Transcribe this lecture and create bullet-point notes',
            name: 'cs101_lecture.m4a'
        }
    ];
    
    const results = await Promise.all(
        audioUrls.map(audio => 
            uploadAudioUrl(
                'your-flow-id',
                audio.url,
                audio.question,
                audio.name
            )
        )
    );
    
    results.forEach((result, index) => {
        console.log(`Audio ${index + 1} transcription:`, result);
    });
}
```

{% endtab %}
{% endtabs %}

### File Uploads

Upload files for the LLM to process and answer queries about their content. Refer to [Files](/using-flowise/uploads#files) for more details.
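The request shape mirrors the image and audio examples above. As a minimal sketch, the payload can be built like this (the `type` value `"file"` follows the image examples above; confirm the exact value for your Flowise version in the Files reference):

```python
import base64

def build_file_upload_payload(question, file_bytes, file_name, mime_type):
    """Build the prediction payload for a file upload.

    Assumption: file uploads use the same `uploads` structure as the
    image examples above; check the Files reference for the exact
    `type` value supported by your Flowise version.
    """
    encoded = base64.b64encode(file_bytes).decode("utf-8")
    return {
        "question": question,
        "uploads": [
            {
                "data": f"data:{mime_type};base64,{encoded}",
                "type": "file",
                "name": file_name,
                "mime": mime_type,
            }
        ],
    }

# The payload is then POSTed as JSON to
# http://localhost:3000/api/v1/prediction/{flow-id}, exactly as in the
# image and audio examples.
```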

### Human Input

When an execution/conversation is halted and requires [human input](/tutorials/human-in-the-loop), the response body returns an `action` object. Users can use it to render the corresponding actions in their UI.

{% hint style="warning" %}
Remember to save `chatId` as it is needed to resume the execution/conversation.
{% endhint %}

```json
{
    "text": "I'm just a computer program, but I'm here and ready to help you! How can I assist you today?\n\nProceed?",
    "question": "Hey, how are you?",
    "chatId": "c5a49fa0-3609-448c-bb36-b9937a34390e",
    "chatMessageId": "04fdaea7-276a-455c-9d18-479653b76c13",
    "executionId": "2cee26fa-5d6d-472b-a82d-fe1a5e2cafeb",
    "agentFlowExecutedData": [
        ...
    ],
    "action": {
        "id": "fcadd7ad-f5e0-4ca8-b11d-e2463aa20d0c",
        "mapping": {
            "approve": "Proceed",
            "reject": "Reject"
        },
        "elements": [
            {
                "type": "agentflowv2-approve-button",
                "label": "Proceed"
            },
            {
                "type": "agentflowv2-reject-button",
                "label": "Reject"
            }
        ],
        "data": {
            "nodeId": "humanInputAgentflow_0",
            "nodeLabel": "Human Input 0",
            "input": {
                "messages": [
                    {
                        "role": "user",
                        "content": "Hey, how are you?"
                    },
                    {
                        "role": "user",
                        "content": "I'm just a computer program, but I'm here and ready to help you! How can I assist you today?",
                        "name": "agent_0"
                    }
                ],
                "humanInputEnableFeedback": true
            }
        }
    }
}
```

#### How UI renders approve/reject/feedback

Flowise Chat UI reads `action.elements` and renders buttons based on element `type`:

* <mark style="color:$success;">**`agentflowv2-approve-button`**</mark> — green outlined button with a checkmark icon
* <mark style="color:$danger;">**`agentflowv2-reject-button`**</mark> — red outlined button with an X icon

When clicked, if `action.data.input.humanInputEnableFeedback` is `true`, a feedback dialog (text area) is shown before submitting. Otherwise it submits immediately.

For your own Chat UI, render the action like this:

```javascript
// Pseudo-code for a custom UI
if (response.action) {
  const { elements, data } = response.action
  const showFeedback = data?.input?.humanInputEnableFeedback

  elements.forEach(elem => {
    if (elem.type === 'agentflowv2-approve-button') {
      // Render a green "Proceed" button
    }
    if (elem.type === 'agentflowv2-reject-button') {
      // Render a red "Reject" button
    }
  })

  // On button click:
  //   - If showFeedback is true, show a text input for feedback first
  //   - Then call the prediction API to resume (see below)
}
```

#### How to resume the conversation

When the user clicks approve or reject, send another `POST /api/v1/prediction/{chatflowId}` with the `humanInput` field:

```json
{
  "chatId": "c5a49fa0-3609-448c-bb36-b9937a34390e",
  "humanInput": {
    "type": "proceed",
    "startNodeId": "humanInputAgentflow_0",
    "feedback": ""
  }
}
```

The three key fields in `humanInput`:

| Field         | Value                     | Description                                                                |
| ------------- | ------------------------- | -------------------------------------------------------------------------- |
| `type`        | `"proceed"` or `"reject"` | Maps from the button type — `approve` → `"proceed"`, `reject` → `"reject"` |
| `startNodeId` |                           | From `action.data.nodeId` — tells the server which node to resume          |
| `feedback`    |                           | Optional feedback text if `humanInputEnableFeedback` was true              |

**Example — reject with feedback:**

```json
{
  "chatId": "c5a49fa0-3609-448c-bb36-b9937a34390e",
  "humanInput": {
    "type": "reject",
    "startNodeId": "humanInputAgentflow_0",
    "feedback": "I think we should use a different approach"
  }
}
```

The server will find the pending action, clear it, and resume the agentflow from the `startNodeId` with the user's decision.
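In code, the resume request body can be assembled from the halted response. A minimal Python sketch (the function and parameter names are illustrative, not part of the API):

```python
def build_human_input_payload(chat_id, decision, node_id, feedback=""):
    """Build the body for resuming a halted agentflow.

    decision: "proceed" (approve button) or "reject" (reject button).
    node_id:  taken from `action.data.nodeId` of the halted response.
    """
    if decision not in ("proceed", "reject"):
        raise ValueError("decision must be 'proceed' or 'reject'")
    return {
        "chatId": chat_id,
        "humanInput": {
            "type": decision,
            "startNodeId": node_id,
            "feedback": feedback,
        },
    }

# POST the result as JSON to /api/v1/prediction/{chatflowId},
# e.g. requests.post(url, json=payload).
```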

## Troubleshooting

1. **404 Not Found**: Verify that the flow ID is correct and the flow exists
2. **401 Unauthorized**: Verify whether the flow is protected by an API key and that a valid key is sent with the request
3. **400 Bad Request**: Check the request format and required fields
4. **413 Payload Too Large**: Reduce file sizes or split large requests
5. **500 Internal Server Error**: Check for misconfigured nodes in the flow


# Streaming

Learn how Flowise streaming works

If streaming is enabled when making a prediction, tokens are sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available.

### Using Python/TS Library

Flowise provides two libraries:

* [Python](https://pypi.org/project/flowise/): `pip install flowise`
* [TypeScript](https://www.npmjs.com/package/flowise-sdk): `npm install flowise-sdk`

{% tabs %}
{% tab title="Python" %}

```python
from flowise import Flowise, PredictionData

def test_streaming():
    client = Flowise()

    # Test streaming prediction
    completion = client.create_prediction(
        PredictionData(
            chatflowId="<flow-id>",
            question="Tell me a joke!",
            streaming=True
        )
    )

    # Process and print each streamed chunk
    print("Streaming response:")
    for chunk in completion:
        # {event: "token", data: "hello"}
        print(chunk)


if __name__ == "__main__":
    test_streaming()
```

{% endtab %}

{% tab title="Typescript" %}

```javascript
import { FlowiseClient } from 'flowise-sdk'

async function test_streaming() {
  const client = new FlowiseClient({ baseUrl: 'http://localhost:3000' });

  try {
    // For streaming prediction
    const prediction = await client.createPrediction({
      chatflowId: '<flow-id>',
      question: 'What is the capital of France?',
      streaming: true,
    });

    for await (const chunk of prediction) {
        // {event: "token", data: "hello"}
        console.log(chunk);
    }
    
  } catch (error) {
    console.error('Error:', error);
  }
}

// Run streaming test
test_streaming()
```

{% endtab %}

{% tab title="cURL" %}

```bash
curl http://localhost:3000/api/v1/prediction/{flow-id} \
  -H "Content-Type: application/json" \
  -d '{
    "question": "Hello world!",
    "streaming": true
  }'
```

{% endtab %}
{% endtabs %}

The raw event stream looks like:

```text
event: token
data: Once upon a time...
```

A prediction's event stream consists of the following event types:

| Event           | Description                                                                                                                         |
| --------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| start           | The start of streaming                                                                                                              |
| token           | Emitted when the prediction is streaming new token output                                                                           |
| error           | Emitted when the prediction returns an error                                                                                        |
| end             | Emitted when the prediction finishes                                                                                                |
| metadata        | All metadata of the related flow, such as `chatId` and `messageId`. Emitted after all tokens have finished streaming, and before the end event |
| sourceDocuments | Emitted when the flow returns source documents from the vector store                                                                |
| usedTools       | Emitted when the flow used tools                                                                                                    |
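If you consume the raw HTTP stream yourself instead of using the SDKs, you need to extract these events from the SSE wire format. A minimal parser sketch (real clients should prefer an SSE library or the Flowise SDK; this handles only the `event:`/`data:` fields shown above):

```python
def parse_sse_events(lines):
    """Parse server-sent-event lines into (event, data) pairs.

    Simplified: a full SSE parser also handles `id:`, `retry:`,
    comment lines, and multi-line data fields.
    """
    events = []
    current_event = "message"  # SSE default when no event: field is sent
    for line in lines:
        if line.startswith("event:"):
            current_event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            events.append((current_event, line[len("data:"):].strip()))
        elif line == "":
            current_event = "message"  # blank line terminates the event
    return events
```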

### Streamlit App

<https://github.com/HenryHengZJ/flowise-streamlit>


# Document Stores

Learn how to use the Flowise Document Stores, written by @toi500

***

Flowise's Document Stores offer a versatile approach to data management, enabling you to upload, split, and prepare your dataset and upsert it in a single location.

This centralized approach simplifies data handling and allows for efficient management of various data formats, making it easier to organize and access your data within the Flowise app.

## Setup

In this tutorial, we will set up a [Retrieval Augmented Generation (RAG)](https://github.com/FlowiseAI/FlowiseDocs/blob/main/en/using-flowise/broken-reference/README.md) system to retrieve information about the *LibertyGuard Deluxe Homeowners Policy*, a topic that LLMs are not extensively trained on.

Using the **Flowise Document Stores**, we'll prepare and upsert data about LibertyGuard and its set of home insurance policies. This will enable our RAG system to accurately answer user queries about LibertyGuard's home insurance offerings.

## 1. Add a Document Store

Start by adding a Document Store and naming it. In our case, "LibertyGuard Deluxe Homeowners Policy".

<figure><img src="/files/Fltu1xVaAt1qDjkYrjzJ" alt=""><figcaption></figcaption></figure>

## 2. Select a Document Loader

Enter the Document Store that you just created and select the [Document Loader](/integrations/langchain/document-loaders) you want to use. In our case, since our dataset is in PDF format, we'll use the [PDF Loader](/integrations/langchain/document-loaders/pdf-file).

Document Loaders are specialized nodes that handle the ingestion of various document formats.

<figure><img src="/files/viSoS9ChJl4zv3dIdvxY" alt=""><figcaption></figcaption></figure>

<figure><img src="/files/DMxAy5m7zZOzzQqJHp26" alt=""><figcaption></figcaption></figure>

## 3. Prepare Your Data

### Step 1: Document Loader

* First, we start by uploading our PDF file.
* Then, we add a **unique metadata key**. This is optional, but a good practice as it allows us to target and filter down this same dataset later on if we need to.
* Every loader comes with preconfigured metadata; where needed, you can use the **Omit Metadata Keys** option to remove unnecessary keys.

<figure><img src="/files/lqAJsPd5jLITdxXCjb00" alt=""><figcaption></figcaption></figure>

### Step 2: Text Splitter

* Select the [Text Splitter](/integrations/langchain/text-splitters) you want to use to chunk your data. In our particular case, we will use the [Recursive Character Text Splitter](/integrations/langchain/text-splitters/recursive-character-text-splitter).
* A Text Splitter splits the loaded documents into smaller pieces, or chunks. This is a crucial preprocessing step for two main reasons:

  * **Retrieval speed and relevance:** Storing and querying large documents as single entities in a vector database can lead to slower retrieval times and potentially less relevant results. Splitting the document into smaller chunks allows for more targeted retrieval. By querying against smaller, more focused units of information, we can achieve faster response times and improve the precision of the retrieved results.
  * **Cost-effective:** Since we only retrieve relevant chunks rather than the entire document, the number of tokens processed by the LLM is significantly reduced. This targeted retrieval approach directly translates to lower usage costs for our LLM, as billing is typically based on token consumption. By minimizing the amount of irrelevant information sent to the LLM, we also optimize for cost.

  There are different text chunking strategies, including:

  * **Character Text Splitting:** Dividing the text into chunks of a fixed number of characters. This method is straightforward but may split words or phrases across chunks, potentially disrupting context.
  * **Token Text Splitting:** Segmenting the text based on word boundaries or tokenization schemes specific to the chosen embedding model. This approach often leads to more semantically coherent chunks, as it preserves word boundaries and considers the underlying linguistic structure of the text.
  * **Recursive Character Text Splitting:** This strategy aims to divide text into chunks that maintain semantic coherence while staying within a specified size limit. It's particularly well-suited for hierarchical documents with nested sections or headings. Instead of blindly splitting at the character limit, it recursively analyzes the text to find logical breakpoints, such as sentence endings or section breaks. This approach ensures that each chunk represents a meaningful unit of information, even if it slightly exceeds the target size.
  * **Markdown Text Splitter:** Designed specifically for markdown-formatted documents, this splitter logically segments the text based on markdown headings and structural elements, creating chunks that correspond to logical sections within the document.
  * **Code Text Splitter:** Tailored for splitting code files, this strategy considers code structure, function definitions, and other programming language-specific elements to create meaningful chunks that are suitable for tasks like code search and documentation.
  * **HTML-to-Markdown Text Splitter:** This specialized splitter first converts HTML content to Markdown and then applies the Markdown Text Splitter, allowing for structured segmentation of web pages and other HTML documents.

  You can also customize the parameters such as:

  * **Chunk Size:** The desired maximum size of each chunk, usually defined in characters or tokens.
  * **Chunk Overlap:** The number of characters or tokens to overlap between consecutive chunks, useful for maintaining contextual flow across chunks.

{% hint style="info" %}
In this guide, we've added a generous **Chunk Overlap** size to ensure no relevant data gets missed between chunks. However, the optimal overlap size is dependent on the complexity of your data. You may need to adjust this value based on your specific dataset and the nature of the information you want to extract.
{% endhint %}
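To make chunk size and overlap concrete, here is a simplified character-level chunker (illustrative only; Flowise's splitters additionally look for natural breakpoints such as sentence or section boundaries):

```python
def chunk_text(text, chunk_size, chunk_overlap):
    """Split text into fixed-size chunks where consecutive chunks
    share `chunk_overlap` characters."""
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# With chunk_size=4 and chunk_overlap=2, "abcdefghij" becomes
# ['abcd', 'cdef', 'efgh', 'ghij', 'ij']: each chunk repeats the
# last two characters of the previous one.
```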

<figure><img src="/files/DRXN4CpED7BbQ2XqiXpU" alt="" width="563"><figcaption></figcaption></figure>

## 4. Preview Your Data

We can now preview how our data will be chunked using our current [Text Splitter](/integrations/langchain/text-splitters) configuration: `chunk_size=1500` and `chunk_overlap=750`.

<figure><img src="/files/8fhg4viPucm0D2hVdXZP" alt=""><figcaption></figcaption></figure>

It's important to experiment with different [Text Splitters](/integrations/langchain/text-splitters), Chunk Sizes, and Overlap values to find the optimal configuration for your specific dataset. This preview allows you to refine the chunking process and ensure that the resulting chunks are suitable for your RAG system.

<figure><img src="/files/0pyvODpqraP0OeLoFt46" alt=""><figcaption></figcaption></figure>

{% hint style="info" %}
Note that our custom metadata `company: "liberty"` has been inserted into each chunk. This metadata allows us to easily filter and retrieve information from this specific dataset later on, even if we use the same vector store index for other datasets.
{% endhint %}

### Understanding Chunk Overlap <a href="#understanding-chunk-overlap" id="understanding-chunk-overlap"></a>

In the context of vector-based retrieval and LLM querying, chunk overlap plays an **important role in maintaining contextual continuity** and **improving response accuracy**, especially when dealing with limited retrieval depth or **top K**, which is the parameter that determines the maximum number of most similar chunks that are retrieved from the [Vector Store](https://docs.flowiseai.com/integrations/langchain/vector-stores) in response to a query.

During query processing, the LLM executes a similarity search against the Vector Store to retrieve the chunks most semantically relevant to the given query. If the retrieval depth, represented by the top K parameter, is set to a small value (4 by default), the LLM initially uses information from only those 4 chunks to generate its response.

This scenario presents us with a problem, since relying solely on a limited number of chunks without overlap can lead to incomplete or inaccurate answers, particularly when dealing with queries that require information spanning multiple chunks.

Chunk overlap helps with this issue by ensuring that a portion of the textual context is shared across consecutive chunks, **increasing the likelihood that all relevant information for a given query is contained within the retrieved chunks**.

In other words, this overlap serves as a bridge between chunks, enabling the LLM to access a wider contextual window even when limited to a small set of retrieved chunks (top K). If a query relates to a concept or piece of information that extends beyond a single chunk, the overlapping regions increase the likelihood of capturing all the necessary context.

Therefore, by introducing chunk overlap during the text splitting phase, we enhance the LLM's ability to:

1. **Preserve contextual continuity:** Overlapping chunks provide a smoother transition of information between consecutive segments, allowing the model to maintain a more coherent understanding of the text.
2. **Improve retrieval accuracy:** By increasing the probability of capturing all relevant information within the target top K retrieved chunks, overlap contributes to more accurate and contextually appropriate responses.
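To make the mechanics concrete, here is a minimal sketch (not Flowise's actual splitter implementation) of a sliding-window splitter where each chunk repeats the tail of the previous one:

```python
def split_with_overlap(text: str, chunk_size: int, chunk_overlap: int) -> list[str]:
    """Split text into chunks of `chunk_size` characters, where each chunk
    shares its first `chunk_overlap` characters with the end of the previous one."""
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# Tiny example: chunk_size=4, chunk_overlap=2 (the same 50% ratio as 1500/750)
print(split_with_overlap("abcdefghij", 4, 2))
# ['abcd', 'cdef', 'efgh', 'ghij', 'ij']
```

Because consecutive chunks share half their content, a fact that straddles a chunk boundary still appears intact in at least one chunk.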

### Accuracy vs. Cost <a href="#accuracy-vs.-cost" id="accuracy-vs.-cost"></a>

So, to further optimize the trade-off between retrieval accuracy and cost, two primary strategies can be used:

1. **Increase/Decrease Chunk Overlap:** Adjusting the overlap percentage during text splitting allows for fine-grained control over the amount of shared context between chunks. Higher overlap percentages generally lead to improved context preservation but may also increase costs since you would need to use more chunks to encompass the entire document. Conversely, lower overlap percentages can reduce costs but risk losing key contextual information between chunks, potentially leading to less accurate or incomplete answers from the LLM.
2. **Increase/Decrease Top K:** Raising the default top K value (4) expands the number of chunks considered for response generation. While this can improve accuracy, it also increases cost.

**Tip:** The choice of optimal **overlap** and **top K** values depends on factors such as document complexity, embedding model characteristics, and the desired balance between accuracy and cost. Experimentation with these values is important for finding the ideal configuration for a specific need.
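To see the cost side of this trade-off, you can estimate how many chunks a sliding-window splitter produces for a given document length. This is a rough back-of-the-envelope sketch; real splitters also respect separators, so actual counts will differ:

```python
import math

def estimated_chunks(doc_length: int, chunk_size: int, chunk_overlap: int) -> int:
    """Approximate chunk count: each new chunk advances by (chunk_size - chunk_overlap)."""
    step = chunk_size - chunk_overlap
    return math.ceil(max(doc_length - chunk_overlap, 1) / step)

# The same hypothetical 100,000-character document, chunk_size=1500:
print(estimated_chunks(100_000, 1500, 100))  # 72 chunks at low overlap
print(estimated_chunks(100_000, 1500, 750))  # 133 chunks at 50% overlap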

## 5. Process Your Data

Once you are satisfied with the chunking process, it's time to process your data.

<figure><img src="/files/JOD0qbkuGFQjx4H4J0qw" alt=""><figcaption></figcaption></figure>

After processing your data, you retain the ability to refine individual chunks by deleting or adding content. This granular control offers several advantages:

* **Enhanced Accuracy:** Identify and rectify inaccuracies or inconsistencies present in the original data, ensuring the information used in your application is reliable.
* **Improved Relevance:** Refine chunk content to emphasize key information and remove irrelevant sections, thereby increasing the precision and effectiveness of your retrieval process.
* **Query Optimization:** Tailor chunks to better align with anticipated user queries, making them more targeted and improving the overall user experience.

## 6. Configure the Upsert Process

With our data properly processed (loaded via a Document Loader and appropriately chunked), we can now proceed to configure the upsert process.

<figure><img src="/files/KVWmg8GJkisqMjgoC6h2" alt=""><figcaption></figcaption></figure>

The upsert process comprises three fundamental steps:

* **Embedding:** We begin by choosing the appropriate embedding model to encode our dataset. This model will transform our data into a numerical vector representation.
* **Vector Store:** Next, we determine the Vector Store where our dataset will reside.
* **Record Manager (Optional):** Finally, we have the option to implement a Record Manager. This component provides the functionalities for managing our dataset once it's stored within the Vector Store.

<figure><img src="/files/oPFLMTeqXAOAn1FFwsXh" alt=""><figcaption></figcaption></figure>

### Step 1: Select Embeddings

Click on the "Select Embeddings" card and choose your preferred [embedding model](/integrations/langchain/embeddings). In our case, we will select OpenAI as the embedding provider and use the `text-embedding-ada-002` model with `1536` dimensions.

Embedding is the process of converting text into a numerical representation that captures its meaning. This numerical representation, also called the embedding vector, is a multi-dimensional array of numbers, where each dimension represents a specific aspect of the text's meaning.

These vectors allow LLMs to compare and search for similar pieces of text within the vector store by measuring the distance or similarity between them in this multi-dimensional space.
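That distance or similarity is typically measured with cosine similarity. The following sketch uses made-up 3-dimensional vectors for readability (real embeddings such as `text-embedding-ada-002` have 1536 dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query_vec = [0.1, 0.9, 0.2]
chunk_vecs = {
    "kitchen flooring": [0.12, 0.88, 0.25],
    "liability limits": [0.9, 0.1, 0.4],
}

# Rank chunks by similarity to the query, most similar first (this is what top K selects from)
ranked = sorted(chunk_vecs, key=lambda k: cosine_similarity(query_vec, chunk_vecs[k]), reverse=True)
print(ranked[0])  # kitchen flooring
```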

#### Understanding Embeddings/Vector Store dimensions <a href="#understanding-embeddings-vector-store-dimensions" id="understanding-embeddings-vector-store-dimensions"></a>

The number of dimensions in a Vector Store index is determined by the embedding model used when we upsert our data, so the two must be chosen together. Each dimension represents a specific feature or concept within the data. For example, a **dimension** might **represent a particular topic, sentiment, or other aspect of the text**.

The more dimensions we use to embed our data, the greater the potential for capturing nuanced meaning from our text. However, this increase comes at the cost of higher computational requirements per query.

In general, a larger number of dimensions needs more resources to store, process, and compare the resulting embedding vectors. Therefore, embedding models like Google's `embedding-001`, which uses 768 dimensions, are, in theory, cheaper to operate than models like OpenAI's `text-embedding-3-large`, with 3072 dimensions.

It's important to note that the **relationship between dimensions and meaning capture isn't strictly linear**; there's a point of diminishing returns where adding more dimensions provides negligible benefit for the added unnecessary cost.

{% hint style="warning" %}
To ensure compatibility between an embedding model and a Vector Store index, dimensional alignment is essential. Both **the embedding model and the vector store index must have the same number of dimensions**. Dimensionality mismatch will result in upsertion errors, as the Vector Store is designed to handle vectors of a specific size determined by the chosen embedding model.
{% endhint %}
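Since a mismatch only surfaces as an error at upsert time, a simple client-side sanity check can catch it earlier. This is a hypothetical helper, assuming you know the dimension your index was created with:

```python
EXPECTED_INDEX_DIM = 1536  # hypothetical: the dimension your Vector Store index was created with

def validate_embedding(vector: list[float]) -> list[float]:
    """Raise before upserting if the vector's dimensionality doesn't match the index."""
    if len(vector) != EXPECTED_INDEX_DIM:
        raise ValueError(
            f"Index expects {EXPECTED_INDEX_DIM} dimensions, got {len(vector)}"
        )
    return vector

validate_embedding([0.0] * 1536)  # passes silently
# validate_embedding([0.0] * 768) would raise ValueError
```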

<figure><img src="/files/qZwx2ZZLMaciGunJc3Kg" alt=""><figcaption></figcaption></figure>

### Step 2: Select Vector Store

Click on the "Select Vector Store" card and choose your preferred [Vector Store](/integrations/langchain/vector-stores). In our case, as we need a production-ready option, we will select Upstash.

A vector store is a special type of database used to store vector embeddings. We can fine-tune parameters like **top K**, which determines the maximum number of most similar chunks retrieved from the Vector Store in response to a query.

{% hint style="info" %}
A lower top K value will yield fewer but potentially more relevant results, while a higher value will return a broader range of results, potentially capturing more information.
{% endhint %}

<figure><img src="/files/z0p0tkUKBSbBG5eeiAiI" alt=""><figcaption></figcaption></figure>

### Step 3: Select Record Manager

Record Manager is an optional but incredibly useful addition to our upserting flow. It allows us to maintain records of all the chunks that have been upserted to our Vector Store, enabling us to efficiently add or delete chunks as needed.

In other words, any changes to your documents during a new upsert will not result in duplicate vector embeddings being stored in the vector store.

Detailed instructions on how to set up and utilize this feature can be found in the dedicated [guide](/integrations/langchain/record-managers).

<figure><img src="/files/q1MBAuSlWRj4QK6t1FoB" alt=""><figcaption></figcaption></figure>

## 7. Upsert Your Data to a Vector Store

To begin the upsert process and transfer your data to the Vector Store, click the "Upsert" button.

<figure><img src="/files/xKIeWzhWTZBrHsMYxWFv" alt=""><figcaption></figcaption></figure>

As illustrated in the image below, our data has been successfully upserted into the Upstash vector database. The data was divided into 85 chunks to optimize the upsertion process and ensure efficient storage and retrieval.

<figure><img src="/files/vWu9wQxTFU0ReDY3SE8x" alt="" width="375"><figcaption></figcaption></figure>

## 8. Test Your Dataset

To quickly test the functionality of your dataset without navigating away from the Document Store, simply utilize the "Retrieval Query" button. This initiates a test query, allowing you to verify the accuracy and effectiveness of your data retrieval process.

<figure><img src="/files/u0Al3xrsbISxrVaGeT0h" alt=""><figcaption></figcaption></figure>

In our case, we see that when querying for information about kitchen flooring coverage in our insurance policy, we retrieve 4 relevant chunks from Upstash, our designated Vector Store. This retrieval is limited to 4 chunks as per the defined "top k" parameter, ensuring we receive the most pertinent information without unnecessary redundancy.

<figure><img src="/files/77h9VWoJsj008WXHujmb" alt=""><figcaption></figcaption></figure>

## 9. Test Your RAG

Finally, our Retrieval-Augmented Generation (RAG) system is operational. It's noteworthy how the LLM effectively interprets the query and successfully leverages relevant information from the chunked data to construct a comprehensive response.

#### Agentflow

With an Agent node, you can add the document store:

<figure><img src="/files/PB3n0Bd1VcOk3tCYQuiB" alt="" width="300"><figcaption></figcaption></figure>

<figure><img src="/files/LiqLdBo5rRBVgMaypjpt" alt="" width="407"><figcaption></figcaption></figure>

Or connect directly to a vector database and embedding model:

<figure><img src="/files/AMWifyBhABeHKJS2w5qR" alt="" width="394"><figcaption></figcaption></figure>

#### Chatflow

You can use the vector store that was configured earlier:

<figure><img src="/files/ltirbfucHM6bEiDMRqgi" alt=""><figcaption></figcaption></figure>

Or, use the Document Store (Vector):

<figure><img src="/files/pEdZopKWnv0wmOzyVNSR" alt=""><figcaption></figcaption></figure>

## 10. API

APIs are also available for creating, updating, and deleting document stores. In this section, we highlight two of the most used APIs:

* Upsert
* Refresh

For details, see the [Document Store API Reference](/api-reference/document-store).

### Upsert API

There are a few different scenarios for the upsert process, and each has a different outcome.

#### Scenario 1: In the same document store, use an existing document loader configuration, upsert as new document loader.

<figure><img src="/files/MO8ZWYsM1TvlScYnFNAy" alt="" width="496"><figcaption></figcaption></figure>

{% hint style="success" %}
**`docId`** represents the existing document loader ID. It is required in the request body for this scenario.
{% endhint %}

{% tabs %}
{% tab title="Python" %}

```python
import requests
import json

DOC_STORE_ID = "your_doc_store_id"
DOC_LOADER_ID = "your_doc_loader_id"
API_URL = f"http://localhost:3000/api/v1/document-store/upsert/{DOC_STORE_ID}"
API_KEY = "your_api_key_here"

form_data = {
    "files": ('my-another-file.pdf', open('my-another-file.pdf', 'rb'))
}

body_data = {
    "docId": DOC_LOADER_ID
}

headers = {
    "Authorization": f"Bearer {API_KEY}"
}

def query(form_data):
    response = requests.post(API_URL, files=form_data, data=body_data, headers=headers)
    print(response)
    return response.json()

output = query(form_data)
print(output)
```

{% endtab %}

{% tab title="Javascript" %}

```javascript
const DOC_STORE_ID = "your_doc_store_id"
const DOC_LOADER_ID = "your_doc_loader_id"

let formData = new FormData();
formData.append("files", input.files[0]);
formData.append("docId", DOC_LOADER_ID)

async function query(formData) {
    const response = await fetch(
        `http://localhost:3000/api/v1/document-store/upsert/${DOC_STORE_ID}`,
        {
            method: "POST",
            headers: {
                "Authorization": "Bearer <your_api_key_here>"
            },
            body: formData
        }
    );
    const result = await response.json();
    return result;
}

query(formData).then((response) => {
    console.log(response);
});
```

{% endtab %}
{% endtabs %}

#### Scenario 2: In the same document store, replace an existing document loader with new files.

<figure><img src="/files/V73TmUGuHaIr0a5TJFSQ" alt="" width="563"><figcaption></figcaption></figure>

{% hint style="success" %}
**`docId`** and **`replaceExisting`** are both required in the request body for this scenario.
{% endhint %}

{% tabs %}
{% tab title="Python" %}

```python
import requests
import json

DOC_STORE_ID = "your_doc_store_id"
DOC_LOADER_ID = "your_doc_loader_id"
API_URL = f"http://localhost:3000/api/v1/document-store/upsert/{DOC_STORE_ID}"
API_KEY = "your_api_key_here"

form_data = {
    "files": ('my-another-file.pdf', open('my-another-file.pdf', 'rb'))
}

body_data = {
    "docId": DOC_LOADER_ID,
    "replaceExisting": True
}

headers = {
    "Authorization": f"Bearer {API_KEY}"
}

def query(form_data):
    response = requests.post(API_URL, files=form_data, data=body_data, headers=headers)
    print(response)
    return response.json()

output = query(form_data)
print(output)
```

{% endtab %}

{% tab title="Javascript" %}

```javascript
const DOC_STORE_ID = "your_doc_store_id";
const DOC_LOADER_ID = "your_doc_loader_id";

let formData = new FormData();
formData.append("files", input.files[0]);
formData.append("docId", DOC_LOADER_ID);
formData.append("replaceExisting", true);

async function query(formData) {
    const response = await fetch(
        `http://localhost:3000/api/v1/document-store/upsert/${DOC_STORE_ID}`,
        {
            method: "POST",
            headers: {
                "Authorization": "Bearer <your_api_key_here>"
            },
            body: formData
        }
    );
    const result = await response.json();
    return result;
}

query(formData).then((response) => {
    console.log(response);
});
```

{% endtab %}
{% endtabs %}

#### Scenario 3: In the same document store, upsert as new document loader from scratch.

<figure><img src="/files/ymXLgKvbzZTlPsZOPEB7" alt="" width="439"><figcaption></figcaption></figure>

{% hint style="success" %}
**`loader`, `splitter`, `embedding`, `vectorStore`** are all required in the request body for this scenario. **`recordManager`** is optional.
{% endhint %}

{% tabs %}
{% tab title="Python" %}

```python
import requests
import json

DOC_STORE_ID = "your_doc_store_id"
API_URL = f"http://localhost:3000/api/v1/document-store/upsert/{DOC_STORE_ID}"
API_KEY = "your_api_key_here"

form_data = {
    "files": ('my-another-file.pdf', open('my-another-file.pdf', 'rb'))
}

loader = {
    "name": "pdfFile",
    "config": {} # you can leave empty to use default config
}

splitter = {
    "name": "recursiveCharacterTextSplitter",
    "config": {
        "chunkSize": 1400,
        "chunkOverlap": 100
    }
}

embedding = {
    "name": "openAIEmbeddings",
    "config": {
        "modelName": "text-embedding-ada-002",
        "credential": "your_credential_id"
    }
}

vectorStore = {
    "name": "pinecone",
    "config": {
        "pineconeIndex": "exampleindex",
        "pineconeNamespace": "examplenamespace",
        "credential": "your_credential_id"
    }
}

body_data = {
    "loader": json.dumps(loader),
    "splitter": json.dumps(splitter),
    "embedding": json.dumps(embedding),
    "vectorStore": json.dumps(vectorStore)
}

headers = {
    "Authorization": f"Bearer {API_KEY}"
}
}

def query(form_data):
    response = requests.post(API_URL, files=form_data, data=body_data, headers=headers)
    print(response)
    return response.json()

output = query(form_data)
print(output)
```

{% endtab %}

{% tab title="Javascript" %}

```javascript
const DOC_STORE_ID = "your_doc_store_id";
const API_URL = `http://localhost:3000/api/v1/document-store/upsert/${DOC_STORE_ID}`;
const API_KEY = "your_api_key_here";

const formData = new FormData();
formData.append("files", input.files[0]);

const loader = {
    name: "pdfFile",
    config: {} // You can leave empty to use the default config
};

const splitter = {
    name: "recursiveCharacterTextSplitter",
    config: {
        chunkSize: 1400,
        chunkOverlap: 100
    }
};

const embedding = {
    name: "openAIEmbeddings",
    config: {
        modelName: "text-embedding-ada-002",
        credential: "your_credential_id"
    }
};

const vectorStore = {
    name: "pinecone",
    config: {
        pineconeIndex: "exampleindex",
        pineconeNamespace: "examplenamespace",
        credential: "your_credential_id"
    }
};

// Stringify each element and append it to the form data
formData.append("loader", JSON.stringify(loader));
formData.append("splitter", JSON.stringify(splitter));
formData.append("embedding", JSON.stringify(embedding));
formData.append("vectorStore", JSON.stringify(vectorStore));

const headers = {
    "Authorization": `Bearer ${API_KEY}`
};

async function query() {
    try {
        const response = await fetch(API_URL, {
            method: "POST",
            headers: headers,
            body: formData
        });

        const result = await response.json();
        console.log(result);
        return result;
    } catch (error) {
        console.error("Error:", error);
    }
}

query();

```

{% endtab %}
{% endtabs %}

{% hint style="danger" %}
Creating from scratch is not recommended, as it exposes your credential ID. The recommended way is to create a placeholder document store and configure the parameters in the UI, then use that placeholder as the base for adding a new document loader or creating a new document store.
{% endhint %}

#### Scenario 4: Create new document store for every upsert

<figure><img src="/files/fpeAq4XMLT8quvFbVwjx" alt="" width="533"><figcaption></figcaption></figure>

{% hint style="success" %}
**`createNewDocStore`** and **`docStore`** are both required in the request body for this scenario.
{% endhint %}

{% tabs %}
{% tab title="Python" %}

```python
import requests
import json

DOC_STORE_ID = "your_doc_store_id"
DOC_LOADER_ID = "your_doc_loader_id"
API_URL = f"http://localhost:3000/api/v1/document-store/upsert/{DOC_STORE_ID}"
API_KEY = "your_api_key_here"

form_data = {
    "files": ('my-another-file.pdf', open('my-another-file.pdf', 'rb'))
}

body_data = {
    "docId": DOC_LOADER_ID,
    "createNewDocStore": True,
    "docStore": json.dumps({"name":"My NEW Doc Store"})
}

headers = {
    "Authorization": f"Bearer {API_KEY}"
}

def query(form_data):
    response = requests.post(API_URL, files=form_data, data=body_data, headers=headers)
    print(response)
    return response.json()

output = query(form_data)
print(output)
```

{% endtab %}

{% tab title="Javascript" %}

```javascript
const DOC_STORE_ID = "your_doc_store_id";
const DOC_LOADER_ID = "your_doc_loader_id";

let formData = new FormData();
formData.append("files", input.files[0]);
formData.append("docId", DOC_LOADER_ID);
formData.append("createNewDocStore", true);
formData.append("docStore", JSON.stringify({ "name": "My NEW Doc Store" }));

async function query(formData) {
    const response = await fetch(
        `http://localhost:3000/api/v1/document-store/upsert/${DOC_STORE_ID}`,
        {
            method: "POST",
            headers: {
                "Authorization": "Bearer <your_api_key_here>"
            },
            body: formData
        }
    );
    const result = await response.json();
    return result;
}

query(formData).then((response) => {
    console.log(response);
});
```

{% endtab %}
{% endtabs %}

#### Q: Where to find Document Store ID and Document Loader ID?

A: You can find the respective IDs from the URL.

<figure><img src="/files/o24TSlpyWhTNzikAT6Kd" alt=""><figcaption></figcaption></figure>

#### Q: Where can I find the available configs to override?

A: You can find the available configs from the **View API** button on each document loader:

<figure><img src="/files/eYvj72wl2rf9TFvpqCsa" alt=""><figcaption></figcaption></figure>

<figure><img src="/files/J9ucaWvWJgk3dJCR8adE" alt=""><figcaption></figcaption></figure>

For each upsert, there are 5 elements involved:

* **`loader`**
* **`splitter`**
* **`embedding`**
* **`vectorStore`**
* **`recordManager`**

You can override existing configuration with the **`config`** body of the element. For example, using the screenshot above, you can create a new document loader with a new **`url`**:

{% tabs %}
{% tab title="Python" %}

```python
import requests

API_URL = "http://localhost:3000/api/v1/document-store/upsert/<storeId>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()

output = query({
    "docId": "<docLoaderId>",
    # override existing configuration
    "loader": {
        "config": {
            "url": "https://new-url.com"
        }
    }
})
print(output)
```

{% endtab %}

{% tab title="Javascript" %}

```javascript
async function query(data) {
    const response = await fetch(
        "http://localhost:3000/api/v1/document-store/upsert/<storeId>",
        {
            method: "POST",
            headers: {
                "Content-Type": "application/json"
            },
            body: JSON.stringify(data)
        }
    );
    const result = await response.json();
    return result;
}

query({
    "docId": "<docLoaderId>",
    // override existing configuration
    "loader": {
        "config": {
            "url": "https://new-url.com"
        }
    }
}).then((response) => {
    console.log(response);
});
```

{% endtab %}
{% endtabs %}

What if the loader has file upload? Yes, you guessed it: we have to use form data as the body.

Using the image below as an example, we can override the **`usage`** parameter of the PDF File Loader like so:

<figure><img src="/files/juoqyscZMBA6bdg0CUyz" alt=""><figcaption></figcaption></figure>

{% tabs %}
{% tab title="Python" %}

```python
import requests
import json

API_URL = "http://localhost:3000/api/v1/document-store/upsert/<storeId>"
API_KEY = "your_api_key_here"

form_data = {
    "files": ('my-another-file.pdf', open('my-another-file.pdf', 'rb'))
}

override_loader_config = {
    "config": {
        "usage": "perPage"
    }
}

body_data = {
    "docId": "<docLoaderId>",
    "loader": json.dumps(override_loader_config) # Override existing configuration
}

headers = {
    "Authorization": f"Bearer {API_KEY}"
}

def query(form_data):
    response = requests.post(API_URL, files=form_data, data=body_data, headers=headers)
    print(response)
    return response.json()

output = query(form_data)
print(output)
```

{% endtab %}

{% tab title="Javascript" %}

```javascript
const DOC_STORE_ID = "your_doc_store_id";
const DOC_LOADER_ID = "your_doc_loader_id";

const overrideLoaderConfig = {
    "config": {
        "usage": "perPage"
    }
}

let formData = new FormData();
formData.append("files", input.files[0]);
formData.append("docId", DOC_LOADER_ID);
formData.append("loader", JSON.stringify(overrideLoaderConfig));

async function query(formData) {
    const response = await fetch(
        `http://localhost:3000/api/v1/document-store/upsert/${DOC_STORE_ID}`,
        {
            method: "POST",
            headers: {
                "Authorization": "Bearer <your_api_key_here>"
            },
            body: formData
        }
    )
    const result = await response.json();
    return result;
}

query(formData).then((response) => {
    console.log(response);
});
```

{% endtab %}
{% endtabs %}

#### Q: When to use Form Data vs JSON as the body of API request?

A: For [Document Loaders](/integrations/langchain/document-loaders) that have File Upload functionality, such as PDF, DOCX, and TXT, the body must be sent as Form Data.

{% hint style="warning" %}
Make sure the sent file type is compatible with the expected file type from document loader.

For example, if a [PDF File Loader](/integrations/langchain/document-loaders/pdf-file) is being used, you should only send **.pdf** files.

To avoid having separate loaders for different file types, we recommend using the [File Loader](/integrations/langchain/document-loaders/file-loader).
{% endhint %}

{% tabs %}
{% tab title="Python API" %}

```python
import requests
import json

API_URL = "http://localhost:3000/api/v1/document-store/upsert/<storeId>"

# use form data to upload files
form_data = {
    "files": ('my-another-file.pdf', open('my-another-file.pdf', 'rb'))
}

body_data = {
    "docId": "<docId>"
}

def query(form_data):
    response = requests.post(API_URL, files=form_data, data=body_data)
    print(response)
    return response.json()

output = query(form_data)
print(output)
```

{% endtab %}

{% tab title="Javascript API" %}

```javascript
// use FormData to upload files
let formData = new FormData();
formData.append("files", input.files[0]);
formData.append("docId", "<docId>");

async function query(formData) {
    const response = await fetch(
        "http://localhost:3000/api/v1/document-store/upsert/<storeId>",
        {
            method: "POST",
            body: formData
        }
    );
    const result = await response.json();
    return result;
}

query(formData).then((response) => {
    console.log(response);
});
```

{% endtab %}
{% endtabs %}

For other [Document Loader](https://docs.flowiseai.com/integrations/langchain/document-loaders) nodes without File Upload functionality, the API body is in **JSON** format:

{% tabs %}
{% tab title="Python API" %}

```python
import requests

API_URL = "http://localhost:3000/api/v1/document-store/upsert/<storeId>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()

output = query({
    "docId": "<docId>"
})
print(output)
```

{% endtab %}

{% tab title="Javascript API" %}

```javascript
async function query(data) {
    const response = await fetch(
        "http://localhost:3000/api/v1/document-store/upsert/<storeId>",
        {
            method: "POST",
            headers: {
                "Content-Type": "application/json"
            },
            body: JSON.stringify(data)
        }
    );
    const result = await response.json();
    return result;
}

query({
    "docId": "<docId>"
}).then((response) => {
    console.log(response);
});
```

{% endtab %}
{% endtabs %}

#### Q: Can I add new metadata?

A: You can provide new metadata by passing the **`metadata`** inside the body request:

```json
{
    "docId": "<doc-id>",
    "metadata": {
        "source": "abc"
    }
}
```

### Refresh API

Often, you might want to re-process every document loader within a document store to fetch the latest data and upsert it to the vector store, keeping everything in sync. This can be done via the Refresh API:

{% tabs %}
{% tab title="Python API" %}

```python
import requests

API_URL = "http://localhost:3000/api/v1/document-store/refresh/<storeId>"

def query():
    response = requests.post(API_URL)
    return response.json()

output = query()
print(output)
```

{% endtab %}

{% tab title="Javascript API" %}

```javascript
async function query() {
    const response = await fetch(
        "http://localhost:3000/api/v1/document-store/refresh/<storeId>",
        {
            method: "POST",
            headers: {
                "Content-Type": "application/json"
            }
        }
    );
    const result = await response.json();
    return result;
}

query().then((response) => {
    console.log(response);
});
```

{% endtab %}
{% endtabs %}

You can also override existing configuration of specific document loader:

{% tabs %}
{% tab title="Python API" %}

```python
import requests

API_URL = "http://localhost:3000/api/v1/document-store/refresh/<storeId>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()

output = query({
    "items": [
        {
            "docId": "<docId>",
            "splitter": {
                "name": "recursiveCharacterTextSplitter",
                "config": {
                    "chunkSize": 2000,
                    "chunkOverlap": 100
                }
            }
        }
    ]
})
print(output)
```

{% endtab %}

{% tab title="Javascript API" %}

```javascript
async function query(data) {
    const response = await fetch(
        "http://localhost:3000/api/v1/document-store/refresh/<storeId>",
        {
            method: "POST",
            headers: {
                "Content-Type": "application/json"
            },
            body: JSON.stringify(data)
        }
    );
    const result = await response.json();
    return result;
}

query({
    "items": [
        {
            "docId": "<docId>",
            "splitter": {
                "name": "recursiveCharacterTextSplitter",
                "config": {
                    "chunkSize": 2000,
                    "chunkOverlap": 100
                }
            }
        }
    ]
}).then((response) => {
    console.log(response);
});
```

{% endtab %}
{% endtabs %}

## 11. Summary

We started by creating a Document Store to organize the LibertyGuard Deluxe Homeowners Policy data. This data was then prepared by uploading, chunking, processing, and upserting it, making it ready for our RAG system.

**Advantages of the Document Store:**

Document Stores offer several benefits for managing and preparing data for Retrieval Augmented Generation (RAG) systems:

* **Organization and Management:** They provide a central location for storing, managing, and preparing your data.
* **Data Quality:** The chunking process helps structure data for accurate retrieval and analysis.
* **Flexibility:** Document Stores allow for refining and adjusting data as needed, improving the accuracy and relevance of your RAG system.

## 12. Video Tutorials

### RAG Like a Boss - Flowise Document Store Tutorial

In this video, [Leon](https://youtube.com/@leonvanzyl) provides a step by step tutorial on using Document Stores to easily manage your RAG knowledge bases in FlowiseAI.

{% embed url="https://youtu.be/PLuSfAkOHOA" %}


# Upsertion

Upsert refers to the process of uploading and processing documents into vector stores, forming the foundation of Retrieval Augmented Generation (RAG) systems.

There are two fundamental ways to upsert data into a vector store:

* [Document Stores (Recommended)](/using-flowise/document-stores)
* Chatflow Upsert

We highly recommend using Document Stores, as they provide a unified interface for the RAG pipeline: retrieving data from different sources, applying a chunking strategy, upserting to the vector database, and syncing with updated data.

In this guide, we are going to cover the other method, Chatflow Upsert. This is an older method that predates Document Stores.

For details, see the [Vector Upsert Endpoint API Reference](/api-reference/vector-upsert).

## Understanding the upserting process

Chatflow allows you to create a flow that can perform both the upserting and RAG querying processes; each can be run independently.

<figure><img src="/files/SSMLM8yC098wPsmq8QdH" alt=""><figcaption><p>Upsert vs. RAG</p></figcaption></figure>

## Setup

For an upsert process to work, we would need to create an **upserting flow** with 5 different nodes:

1. Document Loader
2. Text Splitter
3. Embedding Model
4. Vector Store
5. Record Manager (Optional)

All of these elements are covered in [Document Stores](/using-flowise/document-stores); refer to that guide for more details.

Once the flow is set up correctly, a green button at the top right allows you to start the upsert process.

<figure><img src="/files/dMwftp0V6DXWzc9PyfkS" alt=""><figcaption></figcaption></figure>

<figure><img src="/files/AnzOL2ELgO8DjmcJL1kr" alt="" width="563"><figcaption></figcaption></figure>

The upsert process can also be carried out via API:

<figure><img src="/files/ZmSVQ0a0A6n8H1Q9XDhk" alt="" width="563"><figcaption></figcaption></figure>

## Base URL and Authentication

**Base URL**: `http://localhost:3000` (or your Flowise instance URL)

**Endpoint**: `POST /api/v1/vector/upsert/:id`

**Authentication**: Refer to [Authentication for Flows](/configuration/authorization/chatflow-level)

## Request Methods

The API supports two different request methods depending on your chatflow configuration:

#### 1. Form Data (File Upload)

Used when your chatflow contains Document Loaders with file upload capability.

#### 2. JSON Body (No File Upload)

Used when your chatflow uses Document Loaders that don't require file uploads (e.g., web scrapers, database connectors).

{% hint style="warning" %}
To override any node configurations such as files, metadata, etc., you must explicitly enable that option.
{% endhint %}

<figure><img src="/files/o3B1rAaAu1ltSTxnsWya" alt=""><figcaption></figcaption></figure>

### Document Loaders with File Upload

#### Supported Document Types

| Document Loader   | File Types |
| ----------------- | ---------- |
| CSV File          | `.csv`     |
| Docx/Word File    | `.docx`    |
| JSON File         | `.json`    |
| JSON Lines File   | `.jsonl`   |
| PDF File          | `.pdf`     |
| Text File         | `.txt`     |
| Excel File        | `.xlsx`    |
| Powerpoint File   | `.pptx`    |
| File Loader       | Multiple   |
| Unstructured File | Multiple   |

{% hint style="info" %}
**Important**: Ensure the file type matches your Document Loader configuration. For maximum flexibility, consider using the File Loader which supports multiple file types.
{% endhint %}

#### Request Format (Form Data)

When uploading files, use `multipart/form-data` instead of JSON:

#### Examples

{% tabs %}
{% tab title="Python" %}

```python
import requests
import json
import os

def upsert_document(chatflow_id, file_path, config=None):
    """
    Upsert a single document to a vector store.
    
    Args:
        chatflow_id (str): The chatflow ID configured for vector upserting
        file_path (str): Path to the file to upload
        config (dict): Optional configuration overrides
    
    Returns:
        dict: API response containing upsert results
    """
    url = f"http://localhost:3000/api/v1/vector/upsert/{chatflow_id}"
    
    # Prepare file data
    files = {
        'files': (os.path.basename(file_path), open(file_path, 'rb'))
    }
    
    # Prepare form data
    data = {}
    
    # Add configuration overrides if provided
    if config:
        data['overrideConfig'] = json.dumps(config)
    
    try:
        response = requests.post(url, files=files, data=data)
        response.raise_for_status()
        
        return response.json()
        
    except requests.exceptions.RequestException as e:
        print(f"Upload failed: {e}")
        return None
    finally:
        # Always close the file
        files['files'][1].close()

# Example usage
result = upsert_document(
    chatflow_id="your-chatflow-id",
    file_path="documents/knowledge_base.pdf",
    config={
        "chunkSize": 1000,
        "chunkOverlap": 200
    }
)

if result:
    print(f"Successfully upserted {result.get('numAdded', 0)} chunks")
    if result.get('sourceDocuments'):
        print(f"Source documents: {len(result['sourceDocuments'])}")
else:
    print("Upload failed")
```

{% endtab %}

{% tab title="Javascript (Browser)" %}

```javascript
class VectorUploader {
    constructor(baseUrl = 'http://localhost:3000') {
        this.baseUrl = baseUrl;
    }
    
    async upsertDocument(chatflowId, file, config = {}) {
        /**
         * Upload a file to vector store from browser
         * @param {string} chatflowId - The chatflow ID
         * @param {File} file - File object from input element
         * @param {Object} config - Optional configuration
         */
        
        const formData = new FormData();
        formData.append('files', file);
        
        if (config.overrideConfig) {
            formData.append('overrideConfig', JSON.stringify(config.overrideConfig));
        }
        
        try {
            const response = await fetch(`${this.baseUrl}/api/v1/vector/upsert/${chatflowId}`, {
                method: 'POST',
                body: formData
            });
            
            if (!response.ok) {
                throw new Error(`HTTP error! status: ${response.status}`);
            }
            
            const result = await response.json();
            return result;
            
        } catch (error) {
            console.error('Upload failed:', error);
            throw error;
        }
    }
}

// Example usage in browser
const uploader = new VectorUploader();

// Single file upload
document.getElementById('fileInput').addEventListener('change', async function(e) {
    const file = e.target.files[0];
    if (file) {
        try {
            const result = await uploader.upsertDocument(
                'your-chatflow-id',
                file,
                {
                    overrideConfig: {
                        chunkSize: 1000,
                        chunkOverlap: 200
                    }
                }
            );
            
            console.log('Upload successful:', result);
            alert(`Successfully processed ${result.numAdded || 0} chunks`);
            
        } catch (error) {
            console.error('Upload failed:', error);
            alert('Upload failed: ' + error.message);
        }
    }
});
```

{% endtab %}

{% tab title="Javascript (Node.js)" %}

```javascript
const fs = require('fs');
const path = require('path');
const FormData = require('form-data');
const fetch = require('node-fetch');

class NodeVectorUploader {
    constructor(baseUrl = 'http://localhost:3000') {
        this.baseUrl = baseUrl;
    }
    
    async upsertDocument(chatflowId, filePath, config = {}) {
        /**
         * Upload a file to vector store from Node.js
         * @param {string} chatflowId - The chatflow ID
         * @param {string} filePath - Path to the file
         * @param {Object} config - Optional configuration
         */
        
        if (!fs.existsSync(filePath)) {
            throw new Error(`File not found: ${filePath}`);
        }
        
        const formData = new FormData();
        const fileStream = fs.createReadStream(filePath);
        
        formData.append('files', fileStream, {
            filename: path.basename(filePath),
            contentType: this.getMimeType(filePath)
        });
        
        if (config.overrideConfig) {
            formData.append('overrideConfig', JSON.stringify(config.overrideConfig));
        }
        
        try {
            const response = await fetch(`${this.baseUrl}/api/v1/vector/upsert/${chatflowId}`, {
                method: 'POST',
                body: formData,
                headers: formData.getHeaders()
            });
            
            if (!response.ok) {
                const errorText = await response.text();
                throw new Error(`HTTP ${response.status}: ${errorText}`);
            }
            
            return await response.json();
            
        } catch (error) {
            console.error('Upload failed:', error);
            throw error;
        }
    }

    getMimeType(filePath) {
        const ext = path.extname(filePath).toLowerCase();
        const mimeTypes = {
            '.pdf': 'application/pdf',
            '.txt': 'text/plain',
            '.docx': 'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
            '.csv': 'text/csv',
            '.json': 'application/json'
        };
        return mimeTypes[ext] || 'application/octet-stream';
    }
}

// Example usage
async function main() {
    const uploader = new NodeVectorUploader();
    
    try {
        // Single file upload
        const result = await uploader.upsertDocument(
            'your-chatflow-id',
            './documents/manual.pdf',
            {
                overrideConfig: {
                    chunkSize: 1200,
                    chunkOverlap: 100
                }
            }
        );
        
        console.log('Single file upload result:', result); 
    } catch (error) {
        console.error('Process failed:', error);
    }
}

// Run if this file is executed directly
if (require.main === module) {
    main();
}

module.exports = { NodeVectorUploader };
```

{% endtab %}

{% tab title="cURL" %}

```bash
# Basic file upload with cURL
curl -X POST "http://localhost:3000/api/v1/vector/upsert/your-chatflow-id" \
  -F "files=@documents/knowledge_base.pdf"

# File upload with configuration override
curl -X POST "http://localhost:3000/api/v1/vector/upsert/your-chatflow-id" \
  -F "files=@documents/manual.pdf" \
  -F 'overrideConfig={"chunkSize": 1000, "chunkOverlap": 200}'

# Upload with custom headers for authentication (if configured)
curl -X POST "http://localhost:3000/api/v1/vector/upsert/your-chatflow-id" \
  -H "Authorization: Bearer your-api-token" \
  -F "files=@documents/faq.txt" \
  -F 'overrideConfig={"chunkSize": 800, "chunkOverlap": 150}'
```

{% endtab %}
{% endtabs %}

### Document Loaders without File Upload

For Document Loaders that don't require file uploads (e.g., web scrapers, database connectors, API integrations), use JSON format similar to the Prediction API.

#### Examples

{% tabs %}
{% tab title="Python" %}

```python
import requests
from typing import Dict, Any, Optional

def upsert(chatflow_id: str, config: Optional[Dict[str, Any]] = None) -> Optional[Dict[str, Any]]:
    """
    Trigger vector upserting for chatflows that don't require file uploads.
    
    Args:
        chatflow_id: The chatflow ID configured for vector upserting
        config: Optional configuration overrides
    
    Returns:
        API response containing upsert results
    """
    url = f"http://localhost:3000/api/v1/vector/upsert/{chatflow_id}"
    
    # Only send overrideConfig when a config is actually provided
    payload = {"overrideConfig": config} if config else {}
    
    headers = {
        "Content-Type": "application/json"
    }
    
    try:
        response = requests.post(url, json=payload, headers=headers, timeout=300)
        response.raise_for_status()
        
        return response.json()
        
    except requests.exceptions.RequestException as e:
        print(f"Upsert failed: {e}")
        return None

result = upsert(
    chatflow_id="chatflow-id",
    config={
        "chunkSize": 800,
        "chunkOverlap": 100,
    }
)

if result:
    print(f"Upsert completed: {result.get('numAdded', 0)} chunks added")
```

{% endtab %}

{% tab title="JavaScript" %}

```javascript
class NoFileUploader {
    constructor(baseUrl = 'http://localhost:3000') {
        this.baseUrl = baseUrl;
    }
    
    async upsertWithoutFiles(chatflowId, config = {}) {
        /**
         * Trigger vector upserting for flows that don't need file uploads
         * @param {string} chatflowId - The chatflow ID
         * @param {Object} config - Configuration overrides
         */
        
        const payload = {
            overrideConfig: config
        };
        
        try {
            const response = await fetch(`${this.baseUrl}/api/v1/vector/upsert/${chatflowId}`, {
                method: 'POST',
                headers: {
                    'Content-Type': 'application/json',
                },
                body: JSON.stringify(payload)
            });
            
            if (!response.ok) {
                throw new Error(`HTTP error! status: ${response.status}`);
            }
            
            return await response.json();
            
        } catch (error) {
            console.error('Upsert failed:', error);
            throw error;
        }
    }
    
    async scheduledUpsert(chatflowId, interval = 3600000) {
        /**
         * Set up scheduled upserting for dynamic content sources
         * @param {string} chatflowId - The chatflow ID
         * @param {number} interval - Interval in milliseconds (default: 1 hour)
         */
        
        console.log(`Starting scheduled upsert every ${interval/1000} seconds`);
        
        const performUpsert = async () => {
            try {
                console.log('Performing scheduled upsert...');
                
                const result = await this.upsertWithoutFiles(chatflowId, {
                    addMetadata: {
                        scheduledUpdate: true,
                        timestamp: new Date().toISOString()
                    }
                });
                
                console.log(`Scheduled upsert completed: ${result.numAdded || 0} chunks processed`);
                
            } catch (error) {
                console.error('Scheduled upsert failed:', error);
            }
        };
        
        // Perform initial upsert
        await performUpsert();
        
        // Set up recurring upserts
        return setInterval(performUpsert, interval);
    }
}

// Example usage
const uploader = new NoFileUploader();

async function performUpsert() {
    try {
        const result = await uploader.upsertWithoutFiles(
            'chatflow-id',
            {
                chunkSize: 800,
                chunkOverlap: 100
            }
        );
        
        console.log('Upsert result:', result);
        
    } catch (error) {
        console.error('Upsert failed:', error);
    }
}

// One time upsert
await performUpsert();

// Set up scheduled updates (every 30 minutes)
const schedulerHandle = await uploader.scheduledUpsert(
    'dynamic-content-chatflow-id',
    30 * 60 * 1000
);

// To stop scheduled updates later:
// clearInterval(schedulerHandle);
```

{% endtab %}

{% tab title="cURL" %}

```bash
# Basic upsert with cURL
curl -X POST "http://localhost:3000/api/v1/vector/upsert/your-chatflow-id" \
  -H "Content-Type: application/json"

# Upsert with configuration override
curl -X POST "http://localhost:3000/api/v1/vector/upsert/your-chatflow-id" \
  -H "Content-Type: application/json" \
  -d '{
    "overrideConfig": {
      "returnSourceDocuments": true
    }
  }'
  
# Upsert with custom headers for authentication (if configured)
curl -X POST "http://localhost:3000/api/v1/vector/upsert/your-chatflow-id" \
  -H "Authorization: Bearer your-api-token" \
  -H "Content-Type: application/json"
```

{% endtab %}
{% endtabs %}

## Response Fields

| Field        | Type   | Description                                                 |
| ------------ | ------ | ----------------------------------------------------------- |
| `numAdded`   | number | Number of new chunks added to vector store                  |
| `numDeleted` | number | Number of chunks deleted (if using Record Manager)          |
| `numSkipped` | number | Number of chunks skipped (if using Record Manager)          |
| `numUpdated` | number | Number of existing chunks updated (if using Record Manager) |
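
A successful upsert returns these fields as a JSON body. An illustrative example (values are made up; the Record Manager fields stay at `0` unless a Record Manager node is configured):

```json
{
  "numAdded": 42,
  "numDeleted": 0,
  "numSkipped": 0,
  "numUpdated": 0
}
```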

## Optimization Strategies

### 1. Batch Processing Strategies

```python
import os
from typing import Any, Dict, List

def intelligent_batch_processing(files: List[str], chatflow_id: str) -> Dict[str, Any]:
    """Process files in optimized batches based on size and type."""
    
    # Group files by size and type
    small_files = []
    large_files = []
    
    for file_path in files:
        file_size = os.path.getsize(file_path)
        if file_size > 5_000_000:  # 5MB
            large_files.append(file_path)
        else:
            small_files.append(file_path)
    
    results = {'successful': [], 'failed': [], 'totalChunks': 0}
    
    # Process large files individually
    for file_path in large_files:
        print(f"Processing large file: {file_path}")
        # Individual processing with custom config
        # ... implementation
    
    # Process small files in batches
    batch_size = 5
    for i in range(0, len(small_files), batch_size):
        batch = small_files[i:i + batch_size]
        print(f"Processing batch of {len(batch)} small files")
        # Batch processing
        # ... implementation
    
    return results
```

### 2. Metadata Optimization

```python
import requests
import json
import os
from datetime import datetime
from typing import Dict, Any

def optimize_metadata(file_path: str, custom_metadata: Dict[str, Any]) -> Dict[str, Any]:
    """Illustrative helper: merge file-derived fields into the custom metadata."""
    return {
        **custom_metadata,
        'source_file': os.path.basename(file_path),
        'file_type': os.path.splitext(file_path)[1].lstrip('.'),
    }

def upsert_with_optimized_metadata(chatflow_id: str, file_path: str, 
                                 department: str = None, category: str = None) -> Dict[str, Any]:
    """
    Upsert document with automatically optimized metadata.
    """
    url = f"http://localhost:3000/api/v1/vector/upsert/{chatflow_id}"
    
    # Generate optimized metadata
    custom_metadata = {
        'department': department or 'general',
        'category': category or 'documentation',
        'indexed_date': datetime.now().strftime('%Y-%m-%d'),
        'version': '1.0'
    }
    
    optimized_metadata = optimize_metadata(file_path, custom_metadata)
    
    # Prepare request
    files = {'files': (os.path.basename(file_path), open(file_path, 'rb'))}
    data = {
        'overrideConfig': json.dumps({
            'metadata': optimized_metadata
        })
    }
    
    try:
        response = requests.post(url, files=files, data=data)
        response.raise_for_status()
        return response.json()
    finally:
        files['files'][1].close()

# Example usage with different document types
results = []

# Technical documentation
tech_result = upsert_with_optimized_metadata(
    chatflow_id="tech-docs-chatflow",
    file_path="docs/api_reference.pdf",
    department="engineering",
    category="technical_docs"
)
results.append(tech_result)

# HR policies
hr_result = upsert_with_optimized_metadata(
    chatflow_id="hr-docs-chatflow", 
    file_path="policies/employee_handbook.pdf",
    department="human_resources",
    category="policies"
)
results.append(hr_result)

# Marketing materials
marketing_result = upsert_with_optimized_metadata(
    chatflow_id="marketing-chatflow",
    file_path="marketing/product_brochure.pdf", 
    department="marketing",
    category="promotional"
)
results.append(marketing_result)

for i, result in enumerate(results):
    print(f"Upload {i+1}: {result.get('numAdded', 0)} chunks added")
```

## Troubleshooting

1. **File Upload Fails**
   * Check file format compatibility
   * Verify file size limits
2. **Processing Timeout**
   * Increase request timeout
   * Break large files into smaller parts
   * Optimize chunk size
3. **Vector Store Errors**
   * Check vector store connectivity
   * Verify embedding model dimension compatibility
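
For the processing-timeout case above, one option is to split a large text file into smaller parts and upsert each part separately. A minimal sketch (the size threshold and part-file naming are illustrative):

```python
import os

def split_text_file(path, max_bytes=1_000_000):
    """Split a large text file into part files, each at most ~max_bytes."""
    parts = []

    def flush(buf, idx):
        # Write the buffered lines out as one part file
        part = f"{path}.part{idx}.txt"
        with open(part, "w", encoding="utf-8") as out:
            out.writelines(buf)
        parts.append(part)

    with open(path, "r", encoding="utf-8") as f:
        buf, size, idx = [], 0, 0
        for line in f:
            buf.append(line)
            size += len(line.encode("utf-8"))
            if size >= max_bytes:
                flush(buf, idx)
                buf, size, idx = [], 0, idx + 1
        if buf:
            flush(buf, idx)
    return parts
```

Each resulting part can then be uploaded with its own upsert request, keeping individual requests well under the timeout.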


# Analytic

Learn how to analyze and troubleshoot your chatflows and agentflows

***

Flowise provides step by step tracing for [Agentflow V2](/using-flowise/agentflowv2):

<figure><img src="/files/yckGEQGDRdIowWibNE34" alt=""><figcaption></figcaption></figure>

Flowise also integrates with several analytics providers:

* [LunaryAI](https://lunary.ai/)
* [Langsmith](https://smith.langchain.com/)
* [Langfuse](https://langfuse.com/)
* [LangWatch](https://langwatch.ai/)
* [Arize](https://arize.com/)
* [Phoenix](https://phoenix.arize.com/)
* [Opik](https://www.comet.com/site/products/opik/)

## Setup

1. At the top right corner of your Chatflow or Agentflow, click **Settings** > **Configuration**

<figure><img src="/files/SHfE0tY1BkbFufIz9pzc" alt="Screenshot of user clicking in the configuration menu" width="375"><figcaption></figcaption></figure>

2. Then go to the Analyse Chatflow section

<figure><img src="/files/a0iOf2E3JodiexG9o3vv" alt="Screenshot of the Analyse Chatflow section with the different Analytics providers"><figcaption></figcaption></figure>

3. You will see a list of providers, along with their configuration fields

<figure><img src="/files/2I3ooO8i8Bzwcu5YeRgY" alt="Screenshot of an analytics provider with credentials fields expanded"><figcaption></figcaption></figure>

4. Fill in the credentials and other configuration details, then turn the provider **ON**. Click Save.

<figure><img src="/files/vxPSSU4oPD9BJYlkDFih" alt="Screenshot of analytics providers enabled"><figcaption></figcaption></figure>

## API

Once the analytics provider has been turned **ON** from the UI, you can override or provide additional configuration in the body of the [Prediction API](https://github.com/FlowiseAI/FlowiseDocs/blob/main/en/using-flowise/analytics/api.md#prediction-api):

```json
{
  "question": "hi there",
  "overrideConfig": {
    "analytics": {
      "langFuse": {
        // langSmith, langFuse, lunary, langWatch, opik
        "userId": "user1"
      }
    }
  }
}
```
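
As a minimal sketch, this body can be built programmatically before calling the Prediction API (the provider key, user ID, and helper name here are illustrative):

```python
import json

# Build a Prediction API body with an analytics override.
# "langFuse" can be swapped for langSmith, lunary, langWatch, or opik.
def build_prediction_payload(question, provider, user_id):
    return {
        "question": question,
        "overrideConfig": {"analytics": {provider: {"userId": user_id}}},
    }

payload = build_prediction_payload("hi there", "langFuse", "user1")
print(json.dumps(payload, indent=2))
# POST this payload to /api/v1/prediction/<chatflow-id>
```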


# Arize

Learn how to set up Arize to analyze and troubleshoot your chatflows and agentflows

***

[Arize AI](https://docs.arize.com/arize) is a production-grade observability platform for monitoring, debugging, and improving LLM applications and AI Agents at scale. For a free, open-source alternative, explore [Phoenix](https://docs.flowiseai.com/using-flowise/analytics/phoenix).

## Setup

1. At the top right corner of your Chatflow or Agentflow, click **Settings** > **Configuration**

<figure><img src="/files/SHfE0tY1BkbFufIz9pzc" alt="Screenshot of user clicking in the configuration menu" width="375"><figcaption></figcaption></figure>

2. Then go to the Analyse Chatflow section

<figure><img src="/files/a0iOf2E3JodiexG9o3vv" alt="Screenshot of the Analyse Chatflow section with the different Analytics providers"><figcaption></figcaption></figure>

3. You will see a list of providers, along with their configuration fields. Click on Arize.

<figure><img src="/files/I0aUl4jfusfeLjNkZsns" alt="Screenshot of an analytics provider with credentials fields expanded"><figcaption></figcaption></figure>

4. Create credentials for Arize. Refer to the [official guide](https://docs.arize.com/arize/llm-tracing/quickstart-llm#get-your-api-keys) on how to get the Arize API key.

<figure><img src="/files/doSxNbNxVDWLo1mzIoBU" alt="Screenshot of analytics providers enabled"><figcaption></figcaption></figure>

5. Fill in other configuration details, then turn the provider **ON**

<figure><img src="/files/MNYVgnZLE7dWCsxTUUQz" alt="Screenshot of analytics providers enabled"><figcaption></figcaption></figure>


# LangWatch

Learn how to set up LangWatch to analyze and troubleshoot your chatflows and agentflows

***

[LangWatch](https://langwatch.ai) is a production-grade observability and LLMOps platform designed to monitor, debug, and enhance LLM applications and AI Agents at scale.

## Setup

1. At the top right corner of your Chatflow or Agentflow, click **Settings** > **Configuration**

<figure><img src="/files/SHfE0tY1BkbFufIz9pzc" alt="Screenshot of user clicking in the configuration menu" width="375"><figcaption></figcaption></figure>

2. Then go to the Analyse Chatflow section

<figure><img src="/files/a0iOf2E3JodiexG9o3vv" alt="Screenshot of the Analyse Chatflow section with the different Analytics providers"><figcaption></figcaption></figure>

3. You will see a list of providers, along with their configuration fields. Click on LangWatch.
4. If you haven't already, sign up for a free account [here](https://app.langwatch.ai) to get your API key.
5. Fill in the configuration details, then turn the provider **ON** and click **Save**
6. You can now use LangWatch to analyze and troubleshoot your chatflows and agentflows. Refer to the [official guide](https://docs.langwatch.ai) for more details.


# Langfuse

[Langfuse](https://langfuse.com) is an open source LLM engineering platform that helps teams trace API calls, monitor performance, and debug issues in their AI applications.

With the native integration, you can use Flowise to quickly create complex LLM applications in no-code and then use Langfuse to monitor and improve them.

The integration supports all Flowise use cases: interactive use in the UI, the API, and embeds.

{% embed url="https://youtu.be/iFsSW6HHoa0" %}

You can optionally add `release` to tag the current version of the flow. You usually don't need to change the other options.


# Lunary

[Lunary](https://lunary.ai/) is a monitoring and analytics platform for LLM chatbots.

Flowise has partnered with Lunary to provide a complete integration supporting user tracing, feedback tracking, conversation replays and detailed LLM analytics.

Flowise users can get a 30% discount on the Teams Plan using code `FLOWISEFRIENDS` during checkout.

Read more on how to setup Lunary with Flowise [here](https://lunary.ai/docs/integrations/flowise).


# Opik

Learn how to set up Opik to analyze and troubleshoot your chatflows and agentflows

***

## Setup

1. At the top right corner of your Chatflow or Agentflow, click **Settings** > **Configuration**

<figure><img src="/files/SHfE0tY1BkbFufIz9pzc" alt="Screenshot of user clicking in the configuration menu" width="375"><figcaption></figcaption></figure>

2. Then go to the Analyse Chatflow section

<figure><img src="/files/a0iOf2E3JodiexG9o3vv" alt="Screenshot of the Analyse Chatflow section with the different Analytics providers"><figcaption></figcaption></figure>

3. You will see a list of providers, along with their configuration fields. Click on Opik.

<figure><img src="/files/15XP0xbd672SU2xuuANc" alt="Screenshot of an analytics provider with credentials fields expanded"><figcaption></figcaption></figure>

4. Create credentials for Opik. Refer to the [official guide](https://www.comet.com/docs/opik/tracing/sdk_configuration) on how to get the Opik API key.

<figure><img src="/files/x32QcZKhGrPM5qJw6iKk" alt="Screenshot of analytics providers enabled"><figcaption></figcaption></figure>

5. Fill in other configuration details, then turn the provider **ON**

<figure><img src="/files/MPlAxrQEdWlEwCSekyA6" alt="Screenshot of analytics providers enabled"><figcaption></figcaption></figure>

Now you can analyze your chatflows and agentflows using Opik UI:

<figure><img src="/files/1wBEnqQckxIQK3rRTVCu" alt="Screenshot of Opik UI"><figcaption></figcaption></figure>


# Phoenix

Learn how to set up Phoenix to analyze and troubleshoot your chatflows and agentflows

***

[Phoenix](https://docs.arize.com/phoenix/self-hosting) is an open-source observability tool designed for experimentation, evaluation, and troubleshooting of AI and LLM applications. It can be accessed via its [Cloud](https://app.phoenix.arize.com/login) offering online, or self-hosted on your own machine or server.

## Setup

1. At the top right corner of your Chatflow or Agentflow, click **Settings** > **Configuration**

<figure><img src="/files/SHfE0tY1BkbFufIz9pzc" alt="Screenshot of user clicking in the configuration menu" width="375"><figcaption></figcaption></figure>

2. Then go to the Analyse Chatflow section

<figure><img src="/files/a0iOf2E3JodiexG9o3vv" alt="Screenshot of the Analyse Chatflow section with the different Analytics providers"><figcaption></figcaption></figure>

3. You will see a list of providers, along with their configuration fields. Click on Phoenix.

<figure><img src="/files/UyGo8P5UlCY7bh2xQ3i5" alt="Screenshot of an analytics provider with credentials fields expanded"><figcaption></figcaption></figure>

4. Create credentials for Phoenix. Refer to the [official guide](https://docs.arize.com/phoenix/environments) on how to get the Phoenix API key.

<figure><img src="/files/JJvufGNyAxPOYmCj8NVo" alt="Screenshot of analytics providers enabled"><figcaption></figcaption></figure>

5. Fill in other configuration details, then turn the provider **ON**. Click Save.

<figure><img src="/files/uUvivp1Rz6020F75uB0c" alt="Screenshot of analytics providers enabled"><figcaption></figcaption></figure>


# Monitoring

Flowise has native support for Prometheus with Grafana and OpenTelemetry. However, only high-level metrics such as API requests and counts of flows/predictions are tracked. Refer [here](https://github.com/FlowiseAI/Flowise/blob/main/packages/server/src/Interface.Metrics.ts#L13) for the list of counter metrics. For detailed node-by-node observability, we recommend using [Analytic](broken://pages/z1V6RsbL6q6hrrswC3e9).

## Prometheus

[Prometheus](https://prometheus.io/) is an open-source monitoring and alerting solution.

Before setting up Prometheus, configure the following env variables in Flowise:

```properties
ENABLE_METRICS=true
METRICS_PROVIDER=prometheus
METRICS_INCLUDE_NODE_METRICS=true
```

### Authentication Setup

The `/api/v1/metrics` endpoint requires API key authentication. You'll need to:

1. Generate an API key following the instructions [here](https://docs.flowiseai.com/configuration/authorization/chatflow-level#api-key)
2. Save the API key to a file accessible by Prometheus (e.g., `/etc/prometheus/api_key.txt`)
3. Configure Prometheus to use bearer token authentication
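
A minimal sketch of step 2 (the key value and file path are placeholders; use the path your Prometheus instance can read):

```shell
# Save the API key where Prometheus can read it (placeholder key and path)
printf '%s' "your-flowise-api-key" > api_key.txt
chmod 600 api_key.txt

# Spot-check the endpoint manually before wiring up Prometheus:
# curl -H "Authorization: Bearer $(cat api_key.txt)" http://localhost:3000/api/v1/metrics
```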

### Prometheus Configuration

After Prometheus is installed, run it using a configuration file. Flowise provides a default configuration file that can be found [here](https://github.com/FlowiseAI/Flowise/blob/main/metrics/prometheus/prometheus.config.yml).

You'll need to add authentication configuration to your Prometheus config file:

```yaml
scrape_configs:
  - job_name: 'flowise'
    static_configs:
      - targets: ['localhost:3000']
    metrics_path: '/api/v1/metrics'
    authorization:
      type: Bearer
      credentials_file: '/etc/prometheus/api_key.txt'
```

Make sure your Flowise instance is also running. Open a browser and navigate to port 9090. From the Prometheus dashboard, you should see that the metrics endpoint `/api/v1/metrics` is now live with authentication.

<figure><img src="/files/oNe45R2dq0MZ6If7VpOA" alt=""><figcaption></figcaption></figure>

The `/api/v1/metrics` endpoint is available for Prometheus to pull metrics from, but requires API key authentication as configured above.

## Grafana

Prometheus collects rich metrics and provides a powerful querying language; Grafana transforms metrics into meaningful visualizations.

Grafana can be installed in various ways. Refer to the [guide](https://grafana.com/docs/grafana/latest/setup-grafana/installation/).

In this setup, Grafana is exposed on port 9091:

<figure><img src="/files/OiJ2ILD5SiyRWVWKG8GX" alt=""><figcaption></figcaption></figure>

On the left sidebar, click Add new connection, and select Prometheus:

<figure><img src="/files/aarMB1hErs4oehSx0P2w" alt=""><figcaption></figcaption></figure>

Since our Prometheus server is listening on port 9090, set the server URL accordingly:

<figure><img src="/files/Nl6Eg5X0dygriwwnWYlS" alt=""><figcaption></figcaption></figure>

Scroll to the bottom and test the connection:

<figure><img src="/files/KJBBmk3DsO0ESN7MepVn" alt=""><figcaption></figcaption></figure>

Take note of the data source ID shown in the toolbar; we'll need it when creating dashboards:

<figure><img src="/files/puRGs46ppLFL3YC4kHaJ" alt=""><figcaption></figcaption></figure>

Now that the connection has been added successfully, we can start adding dashboards. From the left sidebar, click Dashboards, then Create Dashboard.

Flowise provides 2 template dashboards:

* [grafana.dashboard.app.json.txt](https://github.com/FlowiseAI/Flowise/blob/main/metrics/grafana/grafana.dashboard.app.json.txt): API metrics such as number of chatflows/agentflows, predictions count, tools, assistant, upserted vectors, etc.
* [grafana.dashboard.server.json.txt](https://github.com/FlowiseAI/Flowise/blob/main/metrics/grafana/grafana.dashboard.server.json.txt): metrics of the Flowise node.js instance such as heap, CPU, RAM usage

If you are using the templates above, find and replace all occurrences of `cds4j1ybfuhogb` with the data source ID you noted earlier.
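
On the command line, the replacement can be done with `sed`. A sketch (the file content below is a stand-in for the downloaded template, and `your-datasource-id` is a placeholder):

```shell
# Stand-in for the downloaded template file (use the real template in practice)
printf '{"datasource": "cds4j1ybfuhogb"}\n' > grafana.dashboard.app.json.txt

# Replace the template's data source ID with your own
sed -i 's/cds4j1ybfuhogb/your-datasource-id/g' grafana.dashboard.app.json.txt
```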

<figure><img src="/files/yNctQc3wRke3HQRW4ko4" alt=""><figcaption></figcaption></figure>

You can also choose to import first then edit the JSON later:

<figure><img src="/files/je9FeT26VpGTnaBVQNo4" alt=""><figcaption></figcaption></figure>

Now, perform some actions in Flowise, and you should see the metrics displayed:

<figure><img src="/files/QpFSL0wjf6LOy387f7kf" alt=""><figcaption></figcaption></figure>

<figure><img src="/files/nNSLIPwT3exrbJVW133F" alt=""><figcaption></figcaption></figure>

## OpenTelemetry

[OpenTelemetry](https://opentelemetry.io/) is an open source framework for creating and managing telemetry data. To enable OTel, configure the following env variables in Flowise:

```properties
ENABLE_METRICS=true
METRICS_PROVIDER=open_telemetry
METRICS_INCLUDE_NODE_METRICS=true
METRICS_OPEN_TELEMETRY_METRIC_ENDPOINT=http://localhost:4318/v1/metrics
METRICS_OPEN_TELEMETRY_PROTOCOL=http # http | grpc | proto (default is http)
METRICS_OPEN_TELEMETRY_DEBUG=true
```

Next, we need an OpenTelemetry Collector to receive, process, and export telemetry data. Flowise provides a [docker compose file](https://github.com/FlowiseAI/Flowise/blob/main/metrics/otel/compose.yaml) which can be used to start the collector container.

```bash
cd Flowise/metrics/otel
docker compose up -d
```

The collector uses the [otel.config.yml](https://github.com/FlowiseAI/Flowise/blob/main/metrics/otel/otel.config.yml) file in the same directory for its configuration. Currently only [Datadog](https://www.datadoghq.com/) and Prometheus are supported; refer to the [OpenTelemetry](https://opentelemetry.io/) documentation to configure other APM tools such as Zipkin, Jaeger, New Relic, Splunk, and others.

Make sure to replace the placeholder API keys for the exporters within the YAML file.
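For orientation, an exporter section in an OpenTelemetry Collector config generally looks like the sketch below. Treat it as illustrative only and keep the structure of the repo's actual `otel.config.yml`; the endpoint and environment variable name here are assumptions:

```yaml
exporters:
  datadog:
    api:
      key: ${env:DD_API_KEY}   # set your Datadog API key
  prometheus:
    endpoint: "0.0.0.0:8889"   # metrics endpoint scraped by Prometheus

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [datadog, prometheus]
```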


# Embed

Learn how to customize and embed our chat widget

***

You can easily add the chat widget to your website. Just copy the provided widget script and paste it anywhere between the `<body>` and `</body>` tags of your HTML file.

<figure><img src="/files/LyKnLuWpAcUcEaMHyrp2" alt=""><figcaption></figcaption></figure>

## Widget Setup

The following video shows how to inject the widget script into any webpage.

{% embed url="https://github.com/FlowiseAI/Flowise/assets/26460777/c128829a-2d08-4d60-b821-1e41a9e677d0" %}

## Using Specific Version

You can specify which version of flowise-embed's `web.js` to use. For a full list of versions, see: <https://www.npmjs.com/package/flowise-embed>

```html
<script type="module">
  import Chatbot from 'https://cdn.jsdelivr.net/npm/flowise-embed@<some-version>/dist/web.js';
  Chatbot.init({
    chatflowid: 'your-chatflowid-here',
    apiHost: 'your-apihost-here',
  })
</script>
```

{% hint style="warning" %}
In Flowise **v2.1.0**, we have modified the way streaming works. If your Flowise version is lower than that, you might find your embedded chatbot not able to receive messages.

You can either update Flowise to **v2.1.0** or above.

Or, if for some reason you prefer not to update Flowise, you can pin the latest **v1.x.x** version of [Flowise-Embed](https://www.npmjs.com/package/flowise-embed?activeTab=versions). The last maintained `web.js` version on that line is **v1.3.14**.

For instance:

`https://cdn.jsdelivr.net/npm/flowise-embed@1.3.14/dist/web.js`
{% endhint %}

## Chatflow Config

You can pass a `chatflowConfig` JSON object to override the existing configuration. This is the same as [https://github.com/FlowiseAI/FlowiseDocs/blob/main/en/using-flowise/broken-reference/README.md](https://github.com/FlowiseAI/FlowiseDocs/blob/main/en/using-flowise/broken-reference/README.md "mention") in the API.

```html
<script type="module">
  import Chatbot from 'https://cdn.jsdelivr.net/npm/flowise-embed/dist/web.js';
  Chatbot.init({
    chatflowid: 'your-chatflowid-here',
    apiHost: 'your-apihost-here',
    chatflowConfig: {
      "sessionId": "123",
      "returnSourceDocuments": true
    }
  })
</script>
```

## Observer Config

This allows you to execute code in the parent page based upon signal observations within the chatbot.

```html
<script type="module">
  import Chatbot from 'https://cdn.jsdelivr.net/npm/flowise-embed/dist/web.js';
  Chatbot.init({
    chatflowid: 'your-chatflowid-here',
    apiHost: 'your-apihost-here',
    observersConfig: {
      // User input has changed
      observeUserInput: (userInput) => {
        console.log({ userInput });
      },
      // The bot message stack has changed
      observeMessages: (messages) => {
        console.log({ messages });
      },
      // The bot loading signal changed
      observeLoading: (loading) => {
        console.log({ loading });
      },
    },
  })
</script>
```
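For example, the loading signal can drive a spinner in the host page. This is only a sketch; the `bot-spinner` element and its styling are hypothetical:

```html
<div id="bot-spinner" style="display: none">Bot is thinking…</div>
<script type="module">
  import Chatbot from 'https://cdn.jsdelivr.net/npm/flowise-embed/dist/web.js';
  Chatbot.init({
    chatflowid: 'your-chatflowid-here',
    apiHost: 'your-apihost-here',
    observersConfig: {
      // Show or hide the parent-page spinner while the bot is responding
      observeLoading: (loading) => {
        document.getElementById('bot-spinner').style.display = loading ? 'block' : 'none';
      },
    },
  })
</script>
```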

## Theme

You can change the full appearance of the embedded chatbot and enable functionalities like tooltips, disclaimers, custom welcome messages, and more using the theme property. This allows you to deeply customize the look and feel of the widget, including:

* **Button:** Position, size, color, icon, drag-and-drop behavior, and automatic opening.
* **Tooltip:** Visibility, message text, background color, text color, and font size.
* **Disclaimer:** Title, message, colors for text, buttons, and background, including a blurred overlay option.
* **Chat Window:** Title, agent/user message display, welcome/error messages, background color/image, dimensions, font size, starter prompts, HTML rendering, message styling (colors, avatars), text input behavior (placeholder, colors, character limits, sounds), feedback options, date/time display, and footer customization.
* **Custom CSS:** Directly inject CSS code for even finer control over the appearance, overriding default styles as needed ([see the instructions guide below](#custom-css-modification))

```html
<script type="module">
  import Chatbot from 'https://cdn.jsdelivr.net/npm/flowise-embed/dist/web.js';
  Chatbot.init({
    chatflowid: 'your-chatflowid-here',
    apiHost: 'your-apihost-here',
    theme: {
      button: {
        backgroundColor: '#3B81F6',
        right: 20,
        bottom: 20,
        size: 48, // small | medium | large | number
        dragAndDrop: true,
        iconColor: 'white',
        customIconSrc: 'https://raw.githubusercontent.com/walkxcode/dashboard-icons/main/svg/google-messages.svg',
        autoWindowOpen: {
          autoOpen: true, //parameter to control automatic window opening
          openDelay: 2, // Optional parameter for delay time in seconds
          autoOpenOnMobile: false, //parameter to control automatic window opening in mobile
        },
      },
      tooltip: {
        showTooltip: true,
        tooltipMessage: 'Hi There 👋!',
        tooltipBackgroundColor: 'black',
        tooltipTextColor: 'white',
        tooltipFontSize: 16,
      },
      disclaimer: {
        title: 'Disclaimer',
        message: 'By using this chatbot, you agree to the <a target="_blank" href="https://flowiseai.com/terms">Terms & Condition</a>',
        textColor: 'black',
        buttonColor: '#3b82f6',
        buttonText: 'Start Chatting',
        buttonTextColor: 'white',
        blurredBackgroundColor: 'rgba(0, 0, 0, 0.4)', //The color of the blurred background that overlays the chat interface
        backgroundColor: 'white',
      },
      customCSS: ``, // Add custom CSS styles. Use !important to override default styles
      chatWindow: {
        showTitle: true,
        showAgentMessages: true,
        title: 'Flowise Bot',
        titleAvatarSrc: 'https://raw.githubusercontent.com/walkxcode/dashboard-icons/main/svg/google-messages.svg',
        titleTextColor: '#ffffff',
        titleBackgroundColor: '#3B81F6',
        welcomeMessage: 'Hello! This is custom welcome message',
        errorMessage: 'This is a custom error message',
        backgroundColor: '#ffffff',
        backgroundImage: 'enter image path or link', // If set, this overlays the background color of the chat window.
        height: 700,
        width: 400,
        fontSize: 16,
        starterPrompts: ['What is a bot?', 'Who are you?'], // Overrides the starter prompts set in the chatflow
        starterPromptFontSize: 15,
        clearChatOnReload: false, // If set to true, the chat will be cleared when the page reloads
        sourceDocsTitle: 'Sources:',
        renderHTML: true,
        botMessage: {
          backgroundColor: '#f7f8ff',
          textColor: '#303235',
          showAvatar: true,
          avatarSrc: 'https://raw.githubusercontent.com/zahidkhawaja/langchain-chat-nextjs/main/public/parroticon.png',
        },
        userMessage: {
          backgroundColor: '#3B81F6',
          textColor: '#ffffff',
          showAvatar: true,
          avatarSrc: 'https://raw.githubusercontent.com/zahidkhawaja/langchain-chat-nextjs/main/public/usericon.png',
        },
        textInput: {
          placeholder: 'Type your question',
          backgroundColor: '#ffffff',
          textColor: '#303235',
          sendButtonColor: '#3B81F6',
          maxChars: 50,
          maxCharsWarningMessage: 'You exceeded the characters limit. Please input less than 50 characters.',
          autoFocus: true, // If not used, autofocus is disabled on mobile and enabled on desktop. true enables it on both, false disables it on both.
          sendMessageSound: true,
          // sendSoundLocation: "send_message.mp3", // If this is not used, the default sound effect will be played if sendSoundMessage is true.
          receiveMessageSound: true,
          // receiveSoundLocation: "receive_message.mp3", // If this is not used, the default sound effect will be played if receiveSoundMessage is true.
        },
        feedback: {
          color: '#303235',
        },
        dateTimeToggle: {
          date: true,
          time: true,
        },
        footer: {
          textColor: '#303235',
          text: 'Powered by',
          company: 'Flowise',
          companyLink: 'https://flowiseai.com',
        },
      },
    },
  });
</script>
```

**Note:** See the full [configuration list](https://github.com/FlowiseAI/FlowiseChatEmbed#configuration)

## Custom Code Modification

To modify the full source code of the embedded chat widget, follow these steps:

1. Fork the [Flowise Chat Embed](https://github.com/FlowiseAI/FlowiseChatEmbed) repository
2. Run `yarn install` to install the necessary dependencies
3. Then you can make any code changes
4. Run `yarn build` to pick up the changes
5. Push changes to the forked repository
6. You can then use your custom `web.js` as embedded chat like so:

Replace `username` with your GitHub username and `forked-repo` with the name of your forked repository.

```html
<script type="module">
      import Chatbot from "https://cdn.jsdelivr.net/gh/username/forked-repo/dist/web.js"
      Chatbot.init({
          chatflowid: "your-chatflowid-here",
          apiHost: "your-apihost-here",
      })
</script>
```

<figure><img src="/files/WWsGO6dxspUBnCmBaW8B" alt="" width="563"><figcaption></figcaption></figure>

```html
<script type="module">
      import Chatbot from "https://cdn.jsdelivr.net/gh/HenryHengZJ/FlowiseChatEmbed-Test/dist/web.js"
      Chatbot.init({
          chatflowid: "your-chatflowid-here",
          apiHost: "your-apihost-here",
      })
</script>
```

{% hint style="info" %}
An alternative to jsdelivr is unpkg. Here is an example:

```
https://unpkg.com/flowise-embed/dist/web.js
```

{% endhint %}

## Custom CSS Modification

You can now directly add custom CSS to style your embedded chat widget, eliminating the need for custom `web.js` files (requires v2.0.8 or later). This allows you to:

* Give each embedded chatbot a unique look and feel
* Use the official `web.js`—no more custom builds or hosting are needed for styling
* Update styles instantly

Here's how to use it:

```html
<script src="https://cdn.jsdelivr.net/gh/FlowiseAI/FlowiseChatEmbed@main/dist/web.js"></script>
<script>
  Chatbot.init({
    chatflowid: "your-chatflowid-here",
    apiHost: "your-apihost-here",
    theme: {
      // ... other theme settings
      customCSS: `
        /* Your custom CSS here */
        /* Use !important to override default styles */
      `,
    }
  });
</script>
```

## CORS

When using the embedded chat widget, there is a chance you might face a CORS issue like:

{% hint style="danger" %}
Access to fetch at 'https\://\<your-flowise.com>/api/v1/prediction/' from origin 'https\://\<your-flowise.com>' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
{% endhint %}

To fix it, specify the following environment variables:

```
CORS_ORIGINS=*
IFRAME_ORIGINS=*
```

For example, if you are using `npx flowise start`:

```
npx flowise start --CORS_ORIGINS=* --IFRAME_ORIGINS=*
```

If using Docker, place the env variables inside `Flowise/docker/.env`

If using local Git clone, place the env variables inside `Flowise/packages/server/.env`

## Video Tutorials

These two videos will teach you how to embed the Flowise widget into a website.

{% embed url="https://youtu.be/4paQ2wObDQ4" %}

{% embed url="https://youtu.be/XOeCV1xyN48" %}


# Uploads

Learn how to upload images, audio, and other files

Flowise lets you upload images, audio, and other files from the chat. In this section, you'll learn how to enable and use these features.

## Image

Certain chat models allow image inputs. Always refer to the official documentation of the LLM to confirm whether the model supports image input.

* [ChatOpenAI](/integrations/llamaindex/chat-models/chatopenai)
* [AzureChatOpenAI](/integrations/llamaindex/chat-models/azurechatopenai)
* [ChatAnthropic](/integrations/langchain/chat-models/chatanthropic)
* [AWSChatBedrock](/integrations/langchain/chat-models/aws-chatbedrock)
* [ChatGoogleGenerativeAI](/integrations/langchain/chat-models/google-ai)
* [ChatOllama](/integrations/llamaindex/chat-models/chatollama)
* [Google Vertex AI](/integrations/langchain/llms/googlevertex-ai)

{% hint style="warning" %}
Image processing only works with certain chains/agents in Chatflow.

[LLMChain](/integrations/langchain/chains/llm-chain), [Conversation Chain](/integrations/langchain/chains/conversation-chain), [ReAct Agent](/integrations/langchain/agents/react-agent-chat), [Conversational Agent](/integrations/langchain/agents/conversational-agent), [Tool Agent](/integrations/langchain/agents/tool-agent)
{% endhint %}

If you enable **Allow Image Upload**, you can upload images from the chat interface.

<div align="center"><figure><img src="/files/ZJ9zxkT4vZpWXjdbOSGI" alt="" width="255"><figcaption></figcaption></figure> <figure><img src="/files/enQUBm6I8afTr318XMCR" alt="" width="290"><figcaption></figcaption></figure></div>

To upload images with the API:

{% tabs %}
{% tab title="Python" %}

```python
import requests
API_URL = "http://localhost:3000/api/v1/prediction/<chatflowid>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()
    
output = query({
    "question": "Can you describe the image?",
    "uploads": [
        {
            "data": "data:image/png;base64,iVBORw0KGgdM2uN0", # base64 string or url
            "type": "file", # file | url
            "name": "Flowise.png",
            "mime": "image/png"
        }
    ]
})
```

{% endtab %}

{% tab title="Javascript" %}

```javascript
async function query(data) {
    const response = await fetch(
        "http://localhost:3000/api/v1/prediction/<chatflowid>",
        {
            method: "POST",
            headers: {
                "Content-Type": "application/json"
            },
            body: JSON.stringify(data)
        }
    );
    const result = await response.json();
    return result;
}

query({
    "question": "Can you describe the image?",
    "uploads": [
        {
            "data": "data:image/png;base64,iVBORw0KGgdM2uN0", //base64 string or url
            "type": "file", // file | url
            "name": "Flowise.png",
            "mime": "image/png"
        }
    ]
}).then((response) => {
    console.log(response);
});
```

{% endtab %}
{% endtabs %}
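The `data` field in the examples above expects a base64 data URI. A small helper like the one below (an assumption for illustration, not part of any Flowise SDK) can build one from a local file:

```python
import base64
import mimetypes

def file_to_data_uri(path: str) -> str:
    """Encode a local file as a data URI for the `data` field of an upload."""
    # Guess the MIME type from the file extension; fall back to a generic type
    mime = mimetypes.guess_type(path)[0] or "application/octet-stream"
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{encoded}"
```

For instance, `file_to_data_uri("Flowise.png")` returns a string like `data:image/png;base64,iVBOR...`, ready to drop into the `uploads` payload.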

## Audio

In the Chatflow Configuration, you can select a speech-to-text module. Supported integrations include:

* OpenAI
* AssemblyAI
* [LocalAI](/integrations/langchain/chat-models/chatlocalai)

When this is enabled, users can speak directly into the microphone, and their speech is transcribed into text.

<div align="left"><figure><img src="/files/7TcRRf3OyGtSzYXbTXtK" alt="" width="563"><figcaption></figcaption></figure> <figure><img src="/files/AozD0kz67whb25pqzsOT" alt="" width="431"><figcaption></figcaption></figure></div>

To upload audio with the API:

{% tabs %}
{% tab title="Python" %}

```python
import requests
API_URL = "http://localhost:3000/api/v1/prediction/<chatflowid>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()
    
output = query({
    "uploads": [
        {
            "data": "data:audio/webm;codecs=opus;base64,GkXf", # base64 string
            "type": "audio",
            "name": "audio.wav",
            "mime": "audio/webm"
        }
    ]
})
```

{% endtab %}

{% tab title="Javascript" %}

```javascript
async function query(data) {
    const response = await fetch(
        "http://localhost:3000/api/v1/prediction/<chatflowid>",
        {
            method: "POST",
            headers: {
                "Content-Type": "application/json"
            },
            body: JSON.stringify(data)
        }
    );
    const result = await response.json();
    return result;
}

query({
    "uploads": [
        {
            "data": "data:audio/webm;codecs=opus;base64,GkXf", // base64 string
            "type": "audio",
            "name": "audio.wav",
            "mime": "audio/webm"
        }
    ]
}).then((response) => {
    console.log(response);
});
```

{% endtab %}
{% endtabs %}

## Files

You can upload files in two ways:

* Retrieval augmented generation (RAG) file uploads
* Full file uploads

When both options are on, full file uploads take precedence.

### RAG File Uploads

You can upsert uploaded files on the fly to the vector store. To enable file uploads, make sure you meet these prerequisites:

* You must include a vector store that supports file uploads in the chatflow.
  * [Pinecone](/integrations/langchain/vector-stores/pinecone)
  * [Milvus](/integrations/langchain/vector-stores/milvus)
  * [Postgres](/integrations/langchain/vector-stores/postgres)
  * [Qdrant](/integrations/langchain/vector-stores/qdrant)
  * [Upstash](/integrations/langchain/vector-stores/upstash-vector)
* If you have multiple vector stores in a chatflow, you can only turn on file upload for one vector store at a time.
* You must connect at least one document loader node to the vector store's document input.
* Supported document loaders:
  * [CSV File](/integrations/langchain/document-loaders/csv-file)
  * [Docx File](/integrations/langchain/document-loaders/docx-file)
  * [Json File](/integrations/langchain/document-loaders/json-file)
  * [Json Lines File](broken://pages/5Yx4z3cCteIRfL5w2Ihp)
  * [PDF File](/integrations/langchain/document-loaders/pdf-file)
  * [Text File](/integrations/langchain/document-loaders/text-file)
  * [Unstructured File](/integrations/langchain/document-loaders/unstructured-file-loader)

<figure><img src="/files/7hCxmbsNW7TTABk8FeVU" alt=""><figcaption></figcaption></figure>

You can upload one or more files in the chat:

<div align="left"><figure><img src="/files/Y6LmcvyEQDdyltfw8XEG" alt="" width="380"><figcaption></figcaption></figure> <figure><img src="/files/3AEBDSdL3kzJGgkB8xdy" alt=""><figcaption></figcaption></figure></div>

Here's how it works:

1. The metadata of each uploaded file is updated with the current chatId, associating the file with that chat session.
2. When querying, an **OR** filter applies:

* Metadata contains `flowise_chatId`, and the value is the current chat session ID
* Metadata does not contain `flowise_chatId`
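Conceptually, the query-time filter resembles the following Pinecone metadata filter (illustrative only; Flowise builds the actual filter internally, and the session ID shown is a placeholder):

```json
{
  "$or": [
    { "flowise_chatId": { "$eq": "current-chat-session-id" } },
    { "flowise_chatId": { "$exists": false } }
  ]
}
```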

An example of a vector embedding upserted on Pinecone:

<figure><img src="/files/b2OZxxljrwkwTw7lrQ3W" alt=""><figcaption></figcaption></figure>

To do this with the API, follow these two steps:

1. Use the [Vector Upsert API](broken://pages/F2AfRpI7qYixNiBWpmIe#vector-upsert-api) with `formData` and `chatId`:

{% tabs %}
{% tab title="Python" %}

```python
import requests

API_URL = "http://localhost:3000/api/v1/vector/upsert/<chatflowid>"

# Use form data to upload files
form_data = {
    "files": ("state_of_the_union.txt", open("state_of_the_union.txt", "rb"))
}

body_data = {
    "chatId": "some-session-id"
}

def query(form_data):
    response = requests.post(API_URL, files=form_data, data=body_data)
    print(response)
    return response.json()

output = query(form_data)
print(output)
```

{% endtab %}

{% tab title="Javascript" %}

```javascript
// Use FormData to upload files
let formData = new FormData();
formData.append("files", input.files[0]);
formData.append("chatId", "some-session-id");

async function query(formData) {
    const response = await fetch(
        "http://localhost:3000/api/v1/vector/upsert/<chatflowid>",
        {
            method: "POST",
            body: formData
        }
    );
    const result = await response.json();
    return result;
}

query(formData).then((response) => {
    console.log(response);
});
```

{% endtab %}
{% endtabs %}

2. Use the [Prediction API](broken://pages/F2AfRpI7qYixNiBWpmIe#prediction) with `uploads` and the `chatId` from step 1:

{% tabs %}
{% tab title="Python" %}

```python
import requests
API_URL = "http://localhost:3000/api/v1/prediction/<chatflowid>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()
    
output = query({
    "question": "What is the speech about?",
    "chatId": "same-session-id-from-step-1",
    "uploads": [
        {
            "data": "data:text/plain;base64,TWFkYWwcy4=",
            "type": "file:rag",
            "name": "state_of_the_union.txt",
            "mime": "text/plain"
        }
    ]
})
```

{% endtab %}

{% tab title="Javascript" %}

```javascript
async function query(data) {
    const response = await fetch(
        "http://localhost:3000/api/v1/prediction/<chatflowid>",
        {
            method: "POST",
            headers: {
                "Content-Type": "application/json"
            },
            body: JSON.stringify(data)
        }
    );
    const result = await response.json();
    return result;
}

query({
    "question": "What is the speech about?",
    "chatId": "same-session-id-from-step-1",
    "uploads": [
        {
            "data": "data:text/plain;base64,TWFkYWwcy4=",
            "type": "file:rag",
            "name": "state_of_the_union.txt",
            "mime": "text/plain"
        }
    ]
}).then((response) => {
    console.log(response);
});
```

{% endtab %}
{% endtabs %}

### Full File Uploads

With RAG file uploads, you can't work with structured data like spreadsheets or tables, and you can't perform full summarization due to lack of full context. In some cases, you might want to include all the file content directly in the prompt for an LLM, especially with models like Gemini and Claude that have longer context windows. [This research paper](https://arxiv.org/html/2407.16833v1) is one of many that compare RAG with longer context windows.

To enable full file uploads, go to **Chatflow Configuration**, open the **File Upload** tab, and click the switch:

<figure><img src="/files/xU9QsOxAzK58MZTIfpM1" alt=""><figcaption></figcaption></figure>

You can see the **File Attachment** button in the chat, where you can upload one or more files. Under the hood, the [File Loader](/integrations/langchain/document-loaders/file-loader) processes each file and converts it into text.

<figure><img src="/files/Ue3U4NSubNUV81sXI0xk" alt=""><figcaption></figcaption></figure>

Note that if your chatflow uses a Chat Prompt Template node, an input must be created from **Format Prompt Values** to pass the file data. The specified input name (e.g. `{file}`) should be included in the **Human Message** field.

<figure><img src="/files/1IQUTfMlLaUaIUkll9IU" alt=""><figcaption></figcaption></figure>

To upload files with the API:

{% tabs %}
{% tab title="Python" %}

```python
import requests
API_URL = "http://localhost:3000/api/v1/prediction/<chatflowid>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()
    
output = query({
    "question": "What is the data about?",
    "chatId": "some-session-id",
    "uploads": [
        {
            "data": "data:text/plain;base64,TWFkYWwcy4=",
            "type": "file:full",
            "name": "state_of_the_union.txt",
            "mime": "text/plain"
        }
    ]
})
```

{% endtab %}

{% tab title="Javascript" %}

```javascript
async function query(data) {
    const response = await fetch(
        "http://localhost:3000/api/v1/prediction/<chatflowid>",
        {
            method: "POST",
            headers: {
                "Content-Type": "application/json"
            },
            body: JSON.stringify(data)
        }
    );
    const result = await response.json();
    return result;
}

query({
    "question": "What is the data about?",
    "chatId": "some-session-id",
    "uploads": [
        {
            "data": "data:text/plain;base64,TWFkYWwcy4=",
            "type": "file:full",
            "name": "state_of_the_union.txt",
            "mime": "text/plain"
        }
    ]
}).then((response) => {
    console.log(response);
});
```

{% endtab %}
{% endtabs %}

As you can see in the examples, uploads require a base64 string. To get a base64 string for a file, use the [Create Attachments API](/api-reference/attachments).

### Difference between Full & RAG Uploads

Full and RAG (Retrieval-Augmented Generation) file uploads serve different purposes.

* **Full File Upload**: This method parses the entire file into a string and sends it to the LLM (Large Language Model). It's beneficial for summarizing the document or extracting key information. However, with very large files, the model might produce inaccurate results or "hallucinations" due to token limitations.
* **RAG File Upload**: Recommended if you aim to reduce token costs by not sending the entire text to the LLM. This approach is suitable for Q\&A tasks on the documents, but it isn't ideal for summarization since it lacks the full document context. It might also take longer because of the upsert process.


# Variables

Learn how to use variables in Flowise

***

Flowise allows users to create variables that can be used in nodes. Variables can be either Static or Runtime.

### Static

A Static variable is saved with the value specified and retrieved as-is.

<figure><img src="/files/KtW8JJAzCMyzolCqhBZQ" alt="" width="542"><figcaption></figcaption></figure>

### Runtime

The value of a Runtime variable is fetched from the **.env** file using `process.env`.

<figure><img src="/files/PzgOnFrkvAH6xBaCJ27q" alt="" width="537"><figcaption></figcaption></figure>
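For example, assuming a Runtime variable named `character` resolves to the environment key of the same name (the value below is only illustrative), the server's `.env` would contain an entry like:

```properties
character=pirate
```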

### Overriding or setting variables through the API

To override a variable's value, you must explicitly enable it from the top-right button:

**Settings** -> **Configuration** -> **Security** tab:

<figure><img src="/files/GrPorR4buq1s42Io7s3Q" alt=""><figcaption></figcaption></figure>

If the variable already exists, the value provided in the API will override its existing value.

```json
{
    "question": "hello",
    "overrideConfig": {
        "vars": {
            "var": "some-override-value"
        }
    }
}
```

### Using Variables

Variables can be used by nodes in Flowise. For instance, here a variable named **`character`** is created:

<figure><img src="/files/HfDo7afmqgdGwUSqVXGd" alt=""><figcaption></figcaption></figure>

We can then reference this variable as **`$vars.<variable-name>`** in the Function field of the following nodes:

* [Custom Tool](/integrations/langchain/tools/custom-tool)
* [Custom Function](/integrations/utilities/custom-js-function)
* [Custom Loader](/integrations/langchain/document-loaders/custom-document-loader)
* [If Else](/integrations/utilities/if-else)
* Custom MCP

<figure><img src="/files/oXJ5wE2mZOaAfswjRlsc" alt="" width="283"><figcaption></figcaption></figure>
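As a sketch, a Custom Function body that reads the `character` variable might look like this. The `$vars` stub at the top is for illustration only; inside a real Custom Function node, Flowise injects `$vars` at runtime, so you would not declare it yourself:

```javascript
// Stub for illustration only — Flowise provides $vars inside the node.
const $vars = { character: 'pirate' };

// Body of a hypothetical Custom Function using the variable:
const character = $vars.character;
const reply = `Respond in the style of a ${character}`;
console.log(reply); // in Flowise, you would `return reply;` instead
```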

You can also use a variable in the text input of any node with the following format:

**`{{$vars.<variable-name>}}`**

For example, in Agent System Message:

<figure><img src="/files/FSSwKMKoflqkE0E2cpr4" alt="" width="508"><figcaption></figcaption></figure>

In Prompt Template:

<figure><img src="/files/FMhqdNhsYISR6TTplzCk" alt=""><figcaption></figcaption></figure>

## Resources

* [Pass Variables to Function](/integrations/langchain/tools/custom-tool#pass-variables-to-function)


# Workspaces

{% hint style="info" %}
Workspaces are only available for Cloud and Enterprise plans
{% endhint %}

Upon your initial login, a default workspace will be automatically generated for you. Workspaces serve to partition resources among various teams or business units. Inside each workspace, Role-Based Access Control (RBAC) is used to manage permissions and access, ensuring users have access only to the resources and settings required for their role.

<figure><img src="/files/kB73LMGa8mSDtuI7C4KM" alt=""><figcaption></figcaption></figure>

## Setting up Admin Account

<details>

<summary>For self-hosted enterprise, the following env variables must be set</summary>

```
JWT_AUTH_TOKEN_SECRET
JWT_REFRESH_TOKEN_SECRET
JWT_ISSUER
JWT_AUDIENCE
JWT_TOKEN_EXPIRY_IN_MINUTES
JWT_REFRESH_TOKEN_EXPIRY_IN_MINUTES
PASSWORD_RESET_TOKEN_EXPIRY_IN_MINS
PASSWORD_SALT_HASH_ROUNDS
TOKEN_HASH_SECRET
```
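Each secret should be a long random string. One way to generate a value (assuming `openssl` is available) is:

```shell
# Prints a 64-character hex string suitable for a secret value
openssl rand -hex 32
```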

</details>

By default, a new installation of Flowise requires an admin setup, similar to how you initially set up a root user for a database.

<figure><img src="/files/2t3YdT3eQiXWkp1T460n" alt="" width="478"><figcaption></figcaption></figure>

After setting up, you will be brought to the Flowise dashboard. In the left sidebar, you will see the User & Workspace Management section. A default workspace has been created automatically.

<figure><img src="/files/Vp1d80ZqnbIIvrj9MEDj" alt=""><figcaption></figcaption></figure>

## Creating Workspace

To create a new Workspace, click Add New:

<figure><img src="/files/JBrthtVHG3v8OXGaqlBx" alt=""><figcaption></figcaption></figure>

You will see yourself added as the Organization Admin in the workspace you created.

<figure><img src="/files/2nX5gqzsyODme72jcSe1" alt=""><figcaption></figcaption></figure>

To invite new users to the workspace, you need to create a Role first.

## Creating Role

Navigate to Roles in the left side bar, and click Add Role:

<figure><img src="/files/KI2yR6rgKrNYvC4VxAxM" alt=""><figcaption></figcaption></figure>

You can specify granular permissions for each resource. The only exceptions are the resources under **User & Workspace Management** (Roles, Users, Workspaces, Login Activity); these are available only to the Account Admin for now.

Here, we create an editor role that has access to everything, and another role with view-only permissions.

<figure><img src="/files/7MHqh8WfcXKNl7WY8j63" alt=""><figcaption></figcaption></figure>

## Invite User

<details>

<summary>For self-hosted enterprise, the following env variables must be set</summary>

```
INVITE_TOKEN_EXPIRY_IN_HOURS
SMTP_HOST
SMTP_PORT
SMTP_USER
SMTP_PASSWORD
```

</details>

Navigate to Users in the left sidebar; you will see yourself as the account admin, indicated by the person icon with a star:

<figure><img src="/files/PlYAXZVG2FZuB3hQACkT" alt=""><figcaption></figcaption></figure>

Click Invite User, then enter the email address to invite, the workspace to assign, and the role.

<figure><img src="/files/rCHCFxdBQfBunUGGOZwV" alt=""><figcaption></figcaption></figure>

Click Send Invite. The invited email address will receive an invitation:

<figure><img src="/files/IMYuD5C33vWbYQXF5uJw" alt=""><figcaption></figcaption></figure>

Upon clicking the invitation link, the invited user will be brought to a Sign Up page.

<figure><img src="/files/l57l7e5IReMOXmzuwTbc" alt="" width="463"><figcaption></figcaption></figure>

After signing up and logging in as the invited user, you will be placed in the assigned workspace, and the User & Workspace Management section will not be visible:

<figure><img src="/files/3qOMjTxly3qlek9q8HiN" alt=""><figcaption></figcaption></figure>

If you are invited into multiple workspaces, you can switch between them from the top right dropdown button. Here we are assigned to Workspace 2 with **view only** permission. Notice that the Add New button for Chatflow is no longer visible. This ensures the user can only view, not create, update, or delete. The same RBAC rules apply to the API as well.

<figure><img src="/files/SqCqhp3WGr0lSd7iG7Vs" alt=""><figcaption></figcaption></figure>

Now, back as the Account Admin, you will be able to see the invited users, their status, roles, and active workspace:

<figure><img src="/files/Pixk21iQjEzCkrJrNCl3" alt=""><figcaption></figcaption></figure>

The Account Admin can also modify the settings of other users:

<figure><img src="/files/K2aHl8bcNFAB94cZYdX9" alt=""><figcaption></figcaption></figure>

## Login Activity

The Account Admin can see every login and logout from all users:

<figure><img src="/files/JVRAl06ixbypxG4Y4zRA" alt=""><figcaption></figcaption></figure>

## Creating Items in a Workspace

Items created in a workspace are isolated from other workspaces. Workspaces are a way to logically group users and resources within an organization, ensuring separate trust boundaries for resource management and access control. It is recommended to create distinct workspaces for each team.

Here, we create a Chatflow named **Chatflow1** in **Workspace1**:

<figure><img src="/files/7AnGlxY7jk6L6YAJYpE9" alt=""><figcaption></figcaption></figure>

When we switch to **Workspace2**, **Chatflow1** will not be visible. This applies to all resources, such as Agentflows, Tools, Assistants, etc.

<figure><img src="/files/Y70gQMWXLwv8psNEyBXK" alt=""><figcaption></figcaption></figure>

The diagram below illustrates the relationship between organizations, workspaces, and the various resources associated with and contained within a workspace.

<figure><img src="/files/kB73LMGa8mSDtuI7C4KM" alt=""><figcaption></figcaption></figure>

## Sharing Credential

You can share a credential with other workspaces. This allows users to reuse the same set of credentials across workspaces.

After creating a credential, the Account Admin or a user with the Share Credential RBAC permission will be able to click Share:

<figure><img src="/files/45lXJBiqrggAYTt9EBID" alt=""><figcaption></figcaption></figure>

Users can select the workspaces to share the credential with:

<figure><img src="/files/PWuUXTLYU5PwDNohrNpW" alt=""><figcaption></figcaption></figure>

Now, switch to the workspace the credential was shared with, and you will see the shared credential. Users are not able to edit shared credentials.

<figure><img src="/files/0gmDjP3QBzdamxWSqDf7" alt=""><figcaption></figcaption></figure>

## Deleting a Workspace

Currently, only the Account Admin can delete workspaces. By default, you cannot delete a workspace while there are still users within it.

<figure><img src="/files/dhZWE84WxvMZbUwzQHsX" alt=""><figcaption></figcaption></figure>

You will need to unlink all of the invited users first. This allows flexibility in case you just want to remove certain users from a workspace. Note that the Organization Owner who created the workspace cannot be unlinked from it.

<figure><img src="/files/Uhi50DfpWo81DtY5VazD" alt=""><figcaption></figcaption></figure>

After unlinking the invited users, once the only user left within the workspace is the Organization Owner, the delete button becomes clickable:

<figure><img src="/files/rHSSmpUqtqDUim9DUzu8" alt=""><figcaption></figcaption></figure>

Deleting a workspace is an irreversible action and will cascade delete all items within that workspace. You will see a warning box:

<figure><img src="/files/eHu2U7wHDiYEsMMDGWCz" alt=""><figcaption></figcaption></figure>

After deleting a workspace, users fall back to the Default workspace. The Default workspace that was created automatically at the start cannot be deleted.


# Evaluations

{% hint style="info" %}
Evaluations are only available for the Cloud and Enterprise plans
{% endhint %}

Evaluations help you monitor and understand the performance of your Chatflow/Agentflow application. At a high level, an evaluation is a process that takes a set of inputs and the corresponding outputs from your Chatflow/Agentflow, and generates scores. These scores can be derived by comparing outputs to reference results, such as through string matching, numeric comparison, or even leveraging an LLM as a judge. These evaluations are conducted using Datasets and Evaluators.

## Datasets

Datasets contain the inputs that will be used to run your Chatflow/Agentflow, along with the expected outputs for comparison. Users can add each input and anticipated output manually, or upload a CSV file with two columns: Input and Output.

<figure><img src="/files/8ZgkmzZKLRZjrVEHMMkn" alt=""><figcaption></figcaption></figure>

| Input                             | Output                       |
| --------------------------------- | ---------------------------- |
| What is the capital of UK         | Capital of UK is London      |
| How many days are there in a year | There are 365 days in a year |
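For example, the table above as a CSV upload would look like the following (hypothetical file contents):

```
Input,Output
What is the capital of UK,Capital of UK is London
How many days are there in a year,There are 365 days in a year
```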

## Evaluators

Evaluators are like unit tests. During an evaluation, the inputs from the Datasets are run against the selected flows and the outputs are evaluated using the selected evaluators. There are 3 types of evaluators:

* **Text Based**: string-based checks:
  * Contains Any
  * Contains All
  * Does Not Contains Any
  * Does Not Contains All
  * Starts With
  * Does Not Starts With

<figure><img src="/files/LO3zPd6IAGFQ0neY3Jfs" alt=""><figcaption></figcaption></figure>

* **Numeric Based**: numeric checks:
  * Total Tokens
  * Prompt Tokens
  * Completion Tokens
  * API Latency
  * LLM Latency
  * Chatflow Latency
  * Agentflow Latency (coming)
  * Output Characters Length

<figure><img src="/files/nqrk7BJ4qLu1jIhh7Tep" alt=""><figcaption></figcaption></figure>

* **LLM Based**: uses another LLM to grade the output
  * Hallucination
  * Correctness

<figure><img src="/files/zxOVeXGSGnIgVi9DkFCu" alt=""><figcaption></figcaption></figure>

## Evaluations

Now that we have Datasets and Evaluators prepared, we can start running an evaluation.

1.) Select the dataset and chatflow to evaluate. You can select multiple datasets and chatflows. In the example below, every input from Dataset1 will be run against 2 chatflows. Since Dataset1 has 2 inputs, a total of 4 outputs will be produced and evaluated.

<figure><img src="/files/cyYTPJme0g2Y3fk9HaQH" alt=""><figcaption></figcaption></figure>

2.) Select the evaluators. Only string-based and numeric-based evaluators can be selected at this stage.

<figure><img src="/files/gzp3vYUQZFkXy16TNrUo" alt=""><figcaption></figcaption></figure>

3.) (Optional) Select an LLM Based evaluator, then click Start Evaluation:

<figure><img src="/files/4IhOPHXXrrVTnlaUDJGy" alt=""><figcaption></figcaption></figure>

4.) Wait for the evaluation to complete:

<figure><img src="/files/JZN0FRYgxRagsz5KrtwL" alt=""><figcaption></figcaption></figure>

5.) After the evaluation is completed, click the graph icon on the right side to view the details:

<figure><img src="/files/F92GLsibPhUSObjiHJSV" alt=""><figcaption></figcaption></figure>

The 3 charts above show the summary of the evaluation:

* Pass/fail rate
* Average prompt and completion tokens used
* Average latency of the request

The table below the charts shows the details of each execution.

<figure><img src="/files/HEiXDiwq7c798agiBwTf" alt=""><figcaption></figcaption></figure>

<figure><img src="/files/MquAuMzyX6L4yvNrdLZk" alt="" width="355"><figcaption></figcaption></figure>

### Re-run evaluation

When the flows used in an evaluation have been updated or modified, a warning message will be shown:

<figure><img src="/files/OmAGb77VAiepX21h9xQG" alt=""><figcaption></figcaption></figure>

You can re-run the same evaluation using the Re-Run Evaluation button at the top right corner. You will be able to see the different versions:

<figure><img src="/files/w8vhCH5ECC3HivqBJp7D" alt=""><figcaption></figcaption></figure>

You can also view and compare the results from different versions:

<figure><img src="/files/DIwlEAb6JGjDBZ4Rlq7m" alt=""><figcaption></figcaption></figure>

## Video Tutorial

{% embed url="<https://youtu.be/kgUttHMkGFg?si=3rLplEp_0TI0p6UV&t=486>" %}


# Configuration

Learn how to set up and run Flowise instances

***

This section will guide you through various configuration options to customize your Flowise instances for development, testing, and production environments.

We'll also provide in-depth guides for deploying Flowise on different Platform as a Service (PaaS) options, ensuring a smooth and successful deployment.

## Guides

* [Auth](/configuration/authorization)
* [Databases](/configuration/databases)
* [Deployment](/configuration/deployment)
* [Environment Variables](/configuration/environment-variables)
* [Rate Limit](/configuration/rate-limit)
* [Proxy](/configuration/running-flowise-behind-company-proxy)
* [SSO](/configuration/sso)
* [Queue Mode](/configuration/running-flowise-using-queue)
* [Production Ready](/configuration/running-in-production)


# Auth

Learn how to secure your Flowise Instances

***

This section guides you through configuring security with Flowise, focusing on authentication mechanisms at the application and chatflow levels.

By implementing robust authentication, you can protect your Flowise instances and ensure only authorized users can access and interact with your chatflows.

## Supported Methods

* [App level](/configuration/authorization/app-level)
* [Chatflow level](/configuration/authorization/chatflow-level)


# Application

Learn how to set up app-level access control for your Flowise instances

***

## Email & Password

From v3.0.1 onwards, a new authentication method was introduced. Flowise uses a [**Passport.js**](https://www.passportjs.org/)**-based authentication system** with JWT tokens stored in secure HTTP-only cookies. When a user logs in, the system validates their email/password against the database using bcrypt hash comparison, then generates two JWT tokens: a short-lived access token (default 60 minutes) and a long-lived refresh token (default 90 days). These tokens are stored as secure cookies. For subsequent requests, the system extracts the JWT from cookies, validates the signature and claims using Passport's JWT strategy, and checks that the user session still exists. The system also supports automatic token refresh when the access token expires, and maintains sessions using either Redis or database storage depending on configuration.

For existing users who have been using [Username & Password (Deprecated)](#username-and-password-deprecated), you need to set up a new admin account. To prevent unauthorized ownership claims, you must first authenticate using the existing username and password configured as `FLOWISE_USERNAME` and `FLOWISE_PASSWORD`.

<figure><img src="/files/n0bemWMXMQ6AgsfD57Zh" alt="" width="387"><figcaption></figcaption></figure>

The following environment variables can be altered:

### Application URL

* `APP_URL` - Your hosted Flowise application URL. Defaults to `http://localhost:3000`

### JWT Environment Variables Configuration

To configure Flowise's JWT authentication parameters, you may alter the following environment variables:

* `JWT_AUTH_TOKEN_SECRET` - The secret key for signing access tokens
* `JWT_REFRESH_TOKEN_SECRET` - Secret for refresh tokens (defaults to auth token secret if not set)
* `JWT_TOKEN_EXPIRY_IN_MINUTES` - Access token lifetime (default: 60 minutes)
* `JWT_REFRESH_TOKEN_EXPIRY_IN_MINUTES` - Refresh token lifetime (default: 129,600 minutes or 90 days)
* `JWT_AUDIENCE` - Token validation audience claim (default: 'AUDIENCE')
* `JWT_ISSUER` - Token validation issuer claim (default: 'ISSUER')
* `EXPRESS_SESSION_SECRET` - Session encryption secret (default: 'flowise')
* `EXPIRE_AUTH_TOKENS_ON_RESTART` - Set to 'true' to invalidate all tokens on server restart (useful for development)

### SMTP Email Configuration

Configure these variables to enable email functionality for password resets and notifications:

* `SMTP_HOST` - The hostname of your SMTP server (e.g., `smtp.gmail.com`, `smtp.host.com`)
* `SMTP_PORT` - The port number for SMTP connection (common values: `587` for TLS, `465` for SSL, `25` for unencrypted)
* `SMTP_USER` - Username for SMTP authentication (usually your email address)
* `SMTP_PASSWORD` - Password or app-specific password for SMTP authentication
* `SMTP_SECURE` - Set to `true` for SSL/TLS encryption, `false` for unencrypted connections
* `ALLOW_UNAUTHORIZED_CERTS` - Set to `true` to allow self-signed certificates (not recommended for production)
* `SENDER_EMAIL` - The "from" email address that will appear on outgoing emails

### Security and Token Configuration

These variables control authentication security, token expiration, and password hashing:

* `PASSWORD_RESET_TOKEN_EXPIRY_IN_MINS` - Expiration time for password reset tokens (default: 15 minutes)
* `PASSWORD_SALT_HASH_ROUNDS` - Number of bcrypt salt rounds for password hashing (default: 10, higher = more secure but slower)
* `TOKEN_HASH_SECRET` - Secret key used for hashing tokens and sensitive data (use a strong, random string)

### Security Best Practices

{% hint style="warning" %}
We recommend configuring your own JWT and secret token environment variables; otherwise, default values will be used, which makes it easier for attackers to forge valid tokens and impersonate users.
{% endhint %}

* Use strong, unique values for `TOKEN_HASH_SECRET` and store them securely
* For production, use `SMTP_SECURE=true` and `ALLOW_UNAUTHORIZED_CERTS=false`
* Set appropriate token expiry times based on your security requirements
* Use higher `PASSWORD_SALT_HASH_ROUNDS` values (12-15) for better security in production
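Putting these recommendations together, a hardened production `.env` might look like the following sketch. All secret values are placeholders; generate your own long random strings (e.g. with `openssl rand -hex 32`):

```sh
JWT_AUTH_TOKEN_SECRET=<64-char-random-hex>
JWT_REFRESH_TOKEN_SECRET=<another-64-char-random-hex>
JWT_TOKEN_EXPIRY_IN_MINUTES=60
JWT_REFRESH_TOKEN_EXPIRY_IN_MINUTES=129600
TOKEN_HASH_SECRET=<strong-random-string>
PASSWORD_SALT_HASH_ROUNDS=12
SMTP_SECURE=true
ALLOW_UNAUTHORIZED_CERTS=false
```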

## Username & Password (Deprecated)

App level authorization protects your Flowise instance by username and password. This protects your apps from being accessible by anyone when deployed online.

<figure><img src="/files/OCRrH5YfLgrQcE5YBXnz" alt=""><figcaption></figcaption></figure>

### How to Set Username & Password

#### Npm

1. Install Flowise

```bash
npm install -g flowise
```

2. Start Flowise with username & password

```bash
npx flowise start --FLOWISE_USERNAME=user --FLOWISE_PASSWORD=1234
```

3. Open <http://localhost:3000>

#### Docker

1. Navigate to `docker` folder

```
cd docker
```

2. Create `.env` file and specify the `PORT`, `FLOWISE_USERNAME`, and `FLOWISE_PASSWORD`

```sh
PORT=3000
FLOWISE_USERNAME=user
FLOWISE_PASSWORD=1234
```

3. Pass `FLOWISE_USERNAME` and `FLOWISE_PASSWORD` to the `docker-compose.yml` file:

```
environment:
    - PORT=${PORT}
    - FLOWISE_USERNAME=${FLOWISE_USERNAME}
    - FLOWISE_PASSWORD=${FLOWISE_PASSWORD}
```

4. `docker compose up -d`
5. Open <http://localhost:3000>
6. You can bring the containers down by `docker compose stop`

#### Git clone

To enable app level authentication, add `FLOWISE_USERNAME` and `FLOWISE_PASSWORD` to the `.env` file in `packages/server`:

```
FLOWISE_USERNAME=user
FLOWISE_PASSWORD=1234
```


# Flows

Learn how to set up chatflow-level access control for your Flowise instances

***

After you have a chatflow / agentflow constructed, by default, your flow is available to the public. Anyone who has access to the Chatflow ID is able to run predictions through Embed or the API.

In cases where you want to allow only certain people to access and interact with it, you can do so by assigning an API key to that specific chatflow.

## API Key

In the dashboard, navigate to the API Keys section, and you should see a DefaultKey created. You can also add or delete keys.

<figure><img src="/files/34U63mzCULgJ3DG0CbE3" alt=""><figcaption></figcaption></figure>

## Chatflow

Navigate to the chatflow; you can now select the API key you want to use to protect it.

<figure><img src="/files/zv0D4iUW1zG3Xt1aongw" alt=""><figcaption></figcaption></figure>

After assigning an API key, the chatflow API can only be accessed when the correct API key is specified in the Authorization header of the HTTP call.

```json
"Authorization": "Bearer <your-api-key>"
```

An example of calling the API using Postman:

<figure><img src="/files/icKNDT1ecctjmSs2EyWP" alt=""><figcaption></figcaption></figure>
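Equivalently, the same request can be made with `curl`. The sketch below assumes Flowise is running locally on port 3000; replace the chatflow ID and API key placeholders with your own values:

```bash
curl -X POST "http://localhost:3000/api/v1/prediction/<chatflow-id>" \
  -H "Authorization: Bearer <your-api-key>" \
  -H "Content-Type: application/json" \
  -d '{"question": "Hello, how are you?"}'
```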


# Databases

Learn how to connect your Flowise instance to a database

***

## Setup

Flowise supports 4 database types:

* SQLite
* MySQL
* PostgreSQL
* MariaDB

### SQLite (Default)

SQLite is the default database. It can be configured with the following env variables:

```sh
DATABASE_TYPE=sqlite
DATABASE_PATH=/root/.flowise #your preferred location
```

A `database.sqlite` file will be created and saved in the path specified by `DATABASE_PATH`. If not specified, the default store path is the `.flowise` folder in your home directory.

**Note:** If none of the env variables are specified, SQLite will be the fallback database.

### MySQL

```sh
DATABASE_TYPE=mysql
DATABASE_PORT=3306
DATABASE_HOST=localhost
DATABASE_NAME=flowise
DATABASE_USER=user
DATABASE_PASSWORD=123
```

### PostgreSQL

```sh
DATABASE_TYPE=postgres
DATABASE_PORT=5432
DATABASE_HOST=localhost
DATABASE_NAME=flowise
DATABASE_USER=user
DATABASE_PASSWORD=123
PGSSLMODE=require
```

### MariaDB

```bash
DATABASE_TYPE="mariadb"
DATABASE_PORT="3306"
DATABASE_HOST="localhost"
DATABASE_NAME="flowise"
DATABASE_USER="flowise"
DATABASE_PASSWORD="mypassword"
```

### How to use Flowise with SQLite and MySQL/MariaDB

{% embed url="<https://youtu.be/R-6uV1Cb8I8>" %}

## Backup

1. Shut down the Flowise application.
2. Ensure that no other applications are connected to the database.
3. Back up your database.
4. Test the backup.

### SQLite

1. Rename the database file to create a backup.

   Windows:

   ```bash
   rename "DATABASE_PATH\database.sqlite" "DATABASE_PATH\BACKUP_FILE_NAME.sqlite"
   ```

   Linux:

   ```bash
   mv DATABASE_PATH/database.sqlite DATABASE_PATH/BACKUP_FILE_NAME.sqlite
   ```
2. Restore the backup by copying it back as `database.sqlite`.

   Windows:

   ```bash
   copy DATABASE_PATH\BACKUP_FILE_NAME.sqlite DATABASE_PATH\database.sqlite
   ```

   Linux:

   ```bash
   cp DATABASE_PATH/BACKUP_FILE_NAME.sqlite DATABASE_PATH/database.sqlite
   ```
3. Test the backup by running Flowise.

### PostgreSQL

1. Back up the database.

   ```bash
   pg_dump -U USERNAME -h HOST -p PORT -d DATABASE_NAME -f /PATH/TO/BACKUP_FILE_NAME.sql
   ```
2. Enter database password.
3. Restore the dump into a test database (create it first if it does not exist).

   ```bash
   psql -U USERNAME -h HOST -p PORT -d TEST_DATABASE_NAME -f /PATH/TO/BACKUP_FILE_NAME.sql
   ```
4. Test the backup database by running Flowise with the `.env` file modified to point to the backup database.

### MySQL & MariaDB

1. Back up the database.

   ```bash
   mysqldump -u USERNAME -p DATABASE_NAME > BACKUP_FILE_NAME.sql
   ```
2. Enter database password.
3. Restore the dump into a test database (create it first if it does not exist).

   ```bash
   mysql -u USERNAME -p TEST_DATABASE_NAME < BACKUP_FILE_NAME.sql
   ```
4. Test the backup database by running Flowise with the `.env` file modified to point to the backup database.


# Deployment

Learn how to deploy Flowise to the cloud

***

Flowise is designed with a platform-agnostic architecture, ensuring compatibility with a wide range of deployment environments to suit your infrastructure needs.

## Local Machine

To deploy Flowise locally, follow our [Get Started](/getting-started) guide.

## Modern Cloud Providers

Modern cloud platforms prioritize automation and focus on developer workflows, simplifying cloud management and ongoing maintenance.

This reduces the technical expertise needed, but may limit the level of customization you have over the underlying infrastructure.

* [Elestio](https://elest.io/open-source/flowiseai)
* [Hugging Face](/configuration/deployment/hugging-face)
* [Northflank](https://northflank.com/stacks/deploy-flowiseai)
* [Railway](/configuration/deployment/railway)
* [Render](/configuration/deployment/render)
* [Replit](/configuration/deployment/replit)
* [RepoCloud](https://repocloud.io/details/?app_id=29)
* [Sealos](/configuration/deployment/sealos)
* [Zeabur](/configuration/deployment/zeabur)

## Established Cloud Providers

Established cloud providers, on the other hand, require a higher level of technical expertise to manage and optimize for your specific needs.

This complexity, however, also grants greater flexibility and control over your cloud environment.

* [AWS](/configuration/deployment/aws)
* [Azure](/configuration/deployment/azure)
* [DigitalOcean](/configuration/deployment/digital-ocean)
* [GCP](/configuration/deployment/gcp)
* [Kubernetes using Helm](https://artifacthub.io/packages/helm/cowboysysop/flowise)


# AWS

Learn how to deploy Flowise on AWS

***

## Prerequisite

This requires some basic understanding of how AWS works.

Two options are available to deploy Flowise on AWS:

* [Deploy on ECS using CloudFormation](#deploy-on-ecs-using-cloudformation)
* [Manually configure an EC2 Instance](#launch-ec2-instance)

## Deploy on ECS using CloudFormation

CloudFormation template is available here: <https://gist.github.com/MrHertal/549b31a18e350b69c7200ae8d26ed691>

It deploys Flowise on an ECS cluster exposed through ELB.

It was inspired by this reference architecture: <https://github.com/aws-samples/ecs-refarch-cloudformation>

Feel free to edit this template to adapt things like the Flowise image version, environment variables, etc.

Example of command to deploy Flowise using the [AWS CLI](https://aws.amazon.com/fr/cli/):

```bash
aws cloudformation create-stack --stack-name flowise --template-body file://flowise-cloudformation.yml --capabilities CAPABILITY_IAM
```

After deployment, the URL of your Flowise application is available in the CloudFormation stack outputs.

## Deploy on ECS using Terraform

The Terraform files (`variables.tf`, `main.tf`) are available in this GitHub repository: [terraform-flowise-setup](https://github.com/huiseo/terraform-flowise-setup/tree/main).

This setup deploys Flowise on an ECS cluster exposed through an Application Load Balancer (ALB). It is based on AWS best practices for ECS deployments.

You can modify the Terraform template to adjust:

* Flowise image version
* Environment variables
* Resource configurations (CPU, memory, etc.)

### Example Commands for Deployment:

1. **Initialize Terraform:**

```bash
terraform init
```

2. **Apply the configuration to create the resources:**

```bash
terraform apply
```

3. **Destroy the resources when they are no longer needed:**

```bash
terraform destroy
```

## Launch EC2 Instance

1. In the EC2 dashboard, click **Launch Instance**

<figure><img src="/files/iMH41soyeaVV4wNcNJMP" alt=""><figcaption></figcaption></figure>

2. Scroll down and **Create new key pair** if you don't have one

<figure><img src="/files/NtFk7F6lZ04cpXXxgn2i" alt="" width="375"><figcaption></figcaption></figure>

3. Fill in your preferred key pair name. For Windows, we will use `.ppk` and PuTTY to connect to the instance. For Mac and Linux, we will use `.pem` and OpenSSH

<figure><img src="/files/XnOerfbpxsNOnkrkqF0z" alt="" width="370"><figcaption></figcaption></figure>

4. Click **Create key pair** and select a location path to save the `.ppk` file
5. Open the left side bar, and open a new tab from **Security Groups**. Then **Create security group**

<figure><img src="/files/JN0VqNXEvtaICPOHtcq3" alt=""><figcaption></figcaption></figure>

6. Fill in your preferred security group name and description. Next, add the following to Inbound Rules and **Create security group**

<figure><img src="/files/SD6GNz7q7ZL6sEhnNU0J" alt=""><figcaption></figcaption></figure>

7. Back to the first tab (EC2 Launch an instance) and scroll down to **Network settings**. Select the security group you've just created

<figure><img src="/files/g9VFt8oVoxYlAlTTnZCo" alt="" width="375"><figcaption></figcaption></figure>

8. Click **Launch instance**. Navigate back to the EC2 Dashboard; after a few minutes, you should see the new instance up and running [🎉](https://emojipedia.org/party-popper/)

<figure><img src="/files/LKVrqY8hGE1X1E4Thnet" alt=""><figcaption></figcaption></figure>

## How to Connect to your instance (Windows)

1. For Windows, we are going to use PuTTY. You can download it from [here](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html).
2. Open PuTTY and fill in the **HostName** with your instance's Public IPv4 DNS name

<figure><img src="/files/lGXNbCJM9Dp6HVuicF7A" alt=""><figcaption></figcaption></figure>

3. From the left-hand sidebar of the PuTTY Configuration window, expand **SSH** and click **Auth**. Click **Browse** and select the `.ppk` file you downloaded earlier.

<figure><img src="/files/yyQumDXksKemO4M9CMQz" alt="" width="296"><figcaption></figcaption></figure>

4. Click **Open** and **Accept** the pop up message

<figure><img src="/files/z9DbpSrnyxCczqxHgeEN" alt="" width="375"><figcaption></figcaption></figure>

5. Then log in as `ec2-user`

<figure><img src="/files/LSqEpQecCRAJ9vFDgpuM" alt="" width="375"><figcaption></figcaption></figure>

6. Now you are connected to the EC2 instance

## How to Connect to your instance (Mac and Linux)

1. Open the Terminal application on your Mac/Linux.
2. *(Optional)* Set the permissions of the private key file to restrict access to it:

```bash
chmod 400 /path/to/mykey.pem
```

3. Use the `ssh` command to connect to your EC2 instance, specifying the username (`ec2-user`), Public IPv4 DNS, and the path to the `.pem` file.

```bash
ssh -i /Users/username/Documents/mykey.pem ec2-user@ec2-123-45-678-910.compute-1.amazonaws.com
```

4. Press Enter, and if everything is configured correctly, you should successfully establish an SSH connection to your EC2 instance

## Install Docker

1. Apply pending updates using the yum command:

```bash
sudo yum update
```

2. Search for Docker package:

```bash
sudo yum search docker
```

3. Get version information:

```bash
sudo yum info docker
```

4. To install Docker, run:

```bash
sudo yum install docker
```

5. Add group membership for the default ec2-user so you can run all docker commands without using the sudo command:

```bash
sudo usermod -a -G docker ec2-user
id ec2-user
newgrp docker
```

6. Install the Docker Compose plugin:

```bash
sudo yum install docker-compose-plugin
```

7. Enable docker service at AMI boot time:

```bash
sudo systemctl enable docker.service
```

8. Start the Docker service:

```bash
sudo systemctl start docker.service
```

## Install Git

```bash
sudo yum install git -y
```

## Setup

1. Clone the repo

```bash
git clone https://github.com/FlowiseAI/Flowise.git
```

2. Change into the `docker` folder

```bash
cd Flowise && cd docker
```

3. Create a `.env` file. You can use your favourite editor. I'll use `nano`

```bash
nano .env
```

<figure><img src="/files/AWva9WmidFKEoKtvudYH" alt="" width="375"><figcaption></figcaption></figure>

4. Specify the env variables:

```sh
PORT=3000
DATABASE_PATH=/root/.flowise
SECRETKEY_PATH=/root/.flowise
LOG_PATH=/root/.flowise/logs
BLOB_STORAGE_PATH=/root/.flowise/storage
```

5. Then press `Ctrl + X` to Exit, and `Y` to save the file
6. Run docker compose

```bash
docker compose up -d
```

7. Your application is now ready at your Public IPv4 DNS on port 3000:

```
http://ec2-123-456-789.compute-1.amazonaws.com:3000
```

8. You can bring the app down by:

```bash
docker compose stop
```

9. You can pull the latest image with:

```bash
docker pull flowiseai/flowise
```

Alternatively:

```bash
docker-compose pull
docker-compose up --build -d
```

## Using NGINX

If you want to get rid of the `:3000` in the URL and use a custom domain, you can use NGINX to reverse proxy port 80 to 3000, so users can open the app at your domain. Example: `http://yourdomain.com`.

1. ```bash
   sudo yum install nginx
   ```
2. ```bash
   nginx -v
   ```
3. ```bash
   sudo systemctl start nginx
   ```
4. ```bash
   sudo nano /etc/nginx/conf.d/flowise.conf
   ```
5. Copy and paste the following, changing `server_name` to your domain:

```shell
server {
    listen 80;
    listen [::]:80;
    server_name yourdomain.com; #Example: demo.flowiseai.com
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Press `Ctrl + X` to exit, and `Y` to save the file.

6. ```bash
   sudo systemctl restart nginx
   ```
7. Go to your DNS provider and add a new A record. The name will be your domain name, and the value will be the Public IPv4 address of the EC2 instance.

<figure><img src="/files/yYWOFBngC5Y46rDmbwuw" alt="" width="367"><figcaption></figcaption></figure>

8. You should now be able to open the app: `http://yourdomain.com`.

### Install Certbot to have HTTPS

If you would like your app to be served at `https://yourdomain.com`, here is how:

1. For installing Certbot and enabling HTTPS on NGINX, we will rely on Python. So, first of all, let's set up a virtual environment:

```bash
sudo python3 -m venv /opt/certbot/
sudo /opt/certbot/bin/pip install --upgrade pip
```

2. Afterwards, run this command to install Certbot:

```bash
sudo /opt/certbot/bin/pip install certbot certbot-nginx
```

3. Now, execute the following command to ensure that the `certbot` command can be run:

```bash
sudo ln -s /opt/certbot/bin/certbot /usr/bin/certbot
```

4. Finally, run the following command to obtain a certificate and let Certbot automatically modify the NGINX configuration, enabling HTTPS:

```bash
sudo certbot --nginx
```

5. After following the certificate generation wizard, you will be able to access your EC2 instance via HTTPS at `https://yourdomain.com`

## Set up automatic renewal

To enable Certbot to automatically renew the certificates, it is sufficient to add a cron job by running the following command:

```bash
echo "0 0,12 * * * root /opt/certbot/bin/python -c 'import random; import time; time.sleep(random.random() * 3600)' && sudo certbot renew -q" | sudo tee -a /etc/crontab > /dev/null
```

## Congratulations!

You have successfully set up Flowise on an EC2 instance with an SSL certificate on your domain [🥳](https://emojipedia.org/partying-face/)


# Azure

Learn how to deploy Flowise on Azure

***

## Flowise as Azure App Service with Postgres: Using Terraform

### Prerequisites

1. **Azure Account**: Ensure you have an Azure account with an active subscription. If you do not have one, sign up at [Azure Portal](https://portal.azure.com/).
2. **Terraform**: Install Terraform CLI on your machine. Download it from [Terraform's website](https://www.terraform.io/downloads.html).
3. **Azure CLI**: Install Azure CLI. Instructions can be found on the [Azure CLI documentation page](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli).

### Setting Up Your Environment

1. **Login to Azure**: Open your terminal or command prompt and login to Azure CLI using:

```bash
az login --tenant <Your Subscription ID> --use-device-code 
```

Follow the prompts to complete the login process.

2. **Set Subscription**: After logging in, set the Azure subscription using:

```bash
az account set --subscription <Your Subscription ID>
```

3. **Initialize Terraform**:

Create a `terraform.tfvars` file in your Terraform project directory, if it's not already there, and add the following content:

```hcl
subscription_name = "subscription_name"
subscription_id = "subscription id"
project_name = "webapp_name"
db_username = "PostgresUserName"
db_password = "strongPostgresPassword"
flowise_secretkey_overwrite = "longandStrongSecretKey"
webapp_ip_rules = [
  {
    name = "AllowedIP"
    ip_address = "X.X.X.X/32"
    headers = null
    virtual_network_subnet_id = null
    subnet_id = null
    service_tag = null
    priority = 300
    action = "Allow"
  }
]
postgres_ip_rules = {
  "ValbyOfficeIP" = "X.X.X.X"
  // Add more key-value pairs as needed
}
flowise_image = "flowiseai/flowise:latest"
tagged_image = "flow:v1"
```

Replace the placeholders with actual values for your setup.

The file tree structure is as follows:

```
flow
├── database.tf
├── main.tf
├── network.tf
├── output.tf
├── providers.tf
├── terraform.tfvars
├── terraform.tfvars.example
├── variables.tf
├── webapp.tf
├── .gitignore // ignore your .tfvars, .terraform.lock.hcl and .terraform

```

Each `.tf` file in the Terraform configuration covers a different aspect of the infrastructure as code:

<details>

<summary>`database.tf` would define the configuration for the Postgres database.</summary>

```hcl

// database.tf

// Database instance
resource "azurerm_postgresql_flexible_server" "postgres" {
  name                         = "postgresql-${var.project_name}"
  location                     = azurerm_resource_group.rg.location
  resource_group_name          = azurerm_resource_group.rg.name
  sku_name                     = "GP_Standard_D2s_v3"
  storage_mb                   = 32768
  version                      = "11"
  delegated_subnet_id          = azurerm_subnet.dbsubnet.id
  private_dns_zone_id          = azurerm_private_dns_zone.postgres.id
  backup_retention_days        = 7
  geo_redundant_backup_enabled = false
  auto_grow_enabled            = false
  administrator_login          = var.db_username
  administrator_password       = var.db_password
  zone                         = "2"

  lifecycle {
    prevent_destroy = false
  }
}

// Firewall
resource "azurerm_postgresql_flexible_server_firewall_rule" "pg_firewall" {
  for_each         = var.postgres_ip_rules
  name             = each.key
  server_id        = azurerm_postgresql_flexible_server.postgres.id
  start_ip_address = each.value
  end_ip_address   = each.value
}

// Database
resource "azurerm_postgresql_flexible_server_database" "production" {
  name      = "production"
  server_id = azurerm_postgresql_flexible_server.postgres.id
  charset   = "UTF8"
  collation = "en_US.utf8"

  # prevent the possibility of accidental data loss
  lifecycle {
    prevent_destroy = false
  }
}

// Transport off
resource "azurerm_postgresql_flexible_server_configuration" "postgres_config" {
  name      = "require_secure_transport"
  server_id = azurerm_postgresql_flexible_server.postgres.id
  value     = "off"
}
```

</details>

<details>

<summary>`main.tf` is the main configuration file: it defines the Azure resource group, storage account, and file share.</summary>

```hcl
// main.tf
resource "random_string" "resource_code" {
  length  = 5
  special = false
  upper   = false
}

// resource group
resource "azurerm_resource_group" "rg" {
  location = var.resource_group_location
  name     = "rg-${var.project_name}"
}

// Storage Account
resource "azurerm_storage_account" "sa" {
  name                     = "${var.subscription_name}${random_string.resource_code.result}"
  resource_group_name      = azurerm_resource_group.rg.name
  location                 = azurerm_resource_group.rg.location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  blob_properties {
    versioning_enabled = true
  }

}

// File share
resource "azurerm_storage_share" "flowise-share" {
  name                 = "flowise"
  storage_account_name = azurerm_storage_account.sa.name
  quota                = 50
}

```

</details>

<details>

<summary>`network.tf` would include networking resources such as virtual networks, subnets, and network security groups.</summary>

```hcl
// network.tf

// Vnet
resource "azurerm_virtual_network" "vnet" {
  name                = "vn-${var.project_name}"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  address_space       = ["10.3.0.0/16"]
}

resource "azurerm_subnet" "dbsubnet" {
  name                                      = "db-subnet-${var.project_name}"
  resource_group_name                       = azurerm_resource_group.rg.name
  virtual_network_name                      = azurerm_virtual_network.vnet.name
  address_prefixes                          = ["10.3.1.0/24"]
  private_endpoint_network_policies_enabled = true
  delegation {
    name = "delegation"
    service_delegation {
      name = "Microsoft.DBforPostgreSQL/flexibleServers"
    }
  }
  lifecycle {
    ignore_changes = [
      service_endpoints,
      delegation
    ]
  }
}

resource "azurerm_subnet" "webappsubnet" {

  name                 = "web-app-subnet-${var.project_name}"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.3.8.0/24"]

  delegation {
    name = "delegation"
    service_delegation {
      name = "Microsoft.Web/serverFarms"
    }
  }
  lifecycle {
    ignore_changes = [
      delegation
    ]
  }
}

resource "azurerm_private_dns_zone" "postgres" {
  name                = "private.postgres.database.azure.com"
  resource_group_name = azurerm_resource_group.rg.name
}

resource "azurerm_private_dns_zone_virtual_network_link" "postgres" {
  name                  = "private-postgres-vnet-link"
  resource_group_name   = azurerm_resource_group.rg.name
  private_dns_zone_name = azurerm_private_dns_zone.postgres.name
  virtual_network_id    = azurerm_virtual_network.vnet.id
}

```

</details>

<details>

<summary>`providers.tf` would define the Terraform providers, such as Azure.</summary>

```hcl
// providers.tf
terraform {
  required_version = ">=0.12"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.87.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~>3.0"
    }
  }
}

provider "azurerm" {
  subscription_id = var.subscription_id
  features {}
}
```

</details>

<details>

<summary>`variables.tf` would declare variables used across all `.tf` files.</summary>

```hcl
// variables.tf
variable "resource_group_location" {
  default     = "westeurope"
  description = "Location of the resource group."
}

variable "container_rg_name" {
  default     = "acrllm"
  description = "Name of container registry."
}

variable "subscription_id" {
  type        = string
  sensitive   = true
  description = "Service Subscription ID"
}

variable "subscription_name" {
  type        = string
  description = "Service Subscription Name"
}


variable "project_name" {
  type        = string
  description = "Project Name"
}

variable "db_username" {
  type        = string
  description = "DB User Name"
}

variable "db_password" {
  type        = string
  sensitive   = true
  description = "DB Password"
}

variable "flowise_secretkey_overwrite" {
  type        = string
  sensitive   = true
  description = "Flowise secret key"
}

variable "webapp_ip_rules" {
  type = list(object({
    name                      = string
    ip_address                = string
    headers                   = string
    virtual_network_subnet_id = string
    subnet_id                 = string
    service_tag               = string
    priority                  = number
    action                    = string
  }))
}

variable "postgres_ip_rules" {
  description = "A map of IP addresses and their corresponding names for firewall rules"
  type        = map(string)
  default     = {}
}

variable "flowise_image" {
  type        = string
  description = "Flowise image from Docker Hub"
}

variable "tagged_image" {
  type        = string
  description = "Tag for flowise image version"
}
```

</details>

<details>

<summary>`webapp.tf` defines the Azure App Service resources: a service plan and a Linux web app.</summary>

```hcl
// webapp.tf
#Create the Linux App Service Plan
resource "azurerm_service_plan" "webappsp" {
  name                = "asp${var.project_name}"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  os_type             = "Linux"
  sku_name            = "P3v3"
}

resource "azurerm_linux_web_app" "webapp" {
  name                = var.project_name
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  service_plan_id     = azurerm_service_plan.webappsp.id

  app_settings = {
    DOCKER_ENABLE_CI                    = true
    WEBSITES_CONTAINER_START_TIME_LIMIT = 1800
    WEBSITES_ENABLE_APP_SERVICE_STORAGE = false
    DATABASE_TYPE                       = "postgres"
    DATABASE_HOST                       = azurerm_postgresql_flexible_server.postgres.fqdn
    DATABASE_NAME                       = azurerm_postgresql_flexible_server_database.production.name
    DATABASE_USER                       = azurerm_postgresql_flexible_server.postgres.administrator_login
    DATABASE_PASSWORD                   = azurerm_postgresql_flexible_server.postgres.administrator_password
    DATABASE_PORT                       = 5432
    FLOWISE_SECRETKEY_OVERWRITE         = var.flowise_secretkey_overwrite
    PORT                                = 3000
    SECRETKEY_PATH                      = "/root"
    DOCKER_IMAGE_TAG                    = var.tagged_image
  }

  storage_account {
    name         = "${var.project_name}_mount"
    access_key   = azurerm_storage_account.sa.primary_access_key
    account_name = azurerm_storage_account.sa.name
    share_name   = azurerm_storage_share.flowise-share.name
    type         = "AzureFiles"
    mount_path   = "/root"
  }


  https_only = true

  site_config {
    always_on              = true
    vnet_route_all_enabled = true
    dynamic "ip_restriction" {
      for_each = var.webapp_ip_rules
      content {
        name       = ip_restriction.value.name
        ip_address = ip_restriction.value.ip_address
      }
    }
    application_stack {
      docker_image_name        = var.flowise_image
      docker_registry_url      = "https://${azurerm_container_registry.acr.login_server}"
      docker_registry_username = azurerm_container_registry.acr.admin_username
      docker_registry_password = azurerm_container_registry.acr.admin_password
    }
  }

  logs {
    http_logs {
      file_system {
        retention_in_days = 7
        retention_in_mb   = 35
      }

    }
  }

  identity {
    type = "SystemAssigned"
  }

  lifecycle {
    create_before_destroy = false

    ignore_changes = [
      virtual_network_subnet_id
    ]
  }

}

resource "azurerm_app_service_virtual_network_swift_connection" "webappvnetintegrationconnection" {
  app_service_id = azurerm_linux_web_app.webapp.id
  subnet_id      = azurerm_subnet.webappsubnet.id

  depends_on = [azurerm_linux_web_app.webapp, azurerm_subnet.webappsubnet]
}

```

</details>

Note: The `.terraform` directory is created by Terraform when initializing a project (`terraform init`) and it contains the plugins and binary files needed for Terraform to run. The `.terraform.lock.hcl` file is used to record the exact provider versions that are being used to ensure consistent installs across different machines.

Navigate to your Terraform project directory and run:

```bash
terraform init
```

This will initialize Terraform and download the required providers.
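Before planning, it is worth validating the configuration locally. These commands only check formatting and internal consistency; they do not touch Azure:

```bash
# Check that all .tf files are consistently formatted
terraform fmt -check

# Verify the configuration is syntactically valid and internally consistent
terraform validate
```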

### Deploying with Terraform

1. **Plan the Deployment**: Run the Terraform plan command to see what resources will be created:

   ```bash
   terraform plan
   ```
2. **Apply the Deployment**: If you are satisfied with the plan, apply the changes:

   ```bash
   terraform apply
   ```

   Confirm the action when prompted, and Terraform will begin creating the resources.
3. **Verify the Deployment**: Once Terraform has completed, it will output any defined outputs such as IP addresses or domain names. Verify that the resources are correctly deployed in your Azure Portal.
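The `output.tf` file from the file tree above is not shown here. As an illustrative sketch, it could expose the web app's default hostname so the final URL is printed after `terraform apply` (the `default_hostname` attribute is exported by the `azurerm_linux_web_app` resource):

```hcl
// output.tf (illustrative sketch)
output "webapp_url" {
  value = "https://${azurerm_linux_web_app.webapp.default_hostname}"
}
```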

***

## Azure Container Instance: Using Azure Portal UI or Azure CLI

### Prerequisites

1. *(Optional)* [Install Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli) if you'd like to follow the CLI-based commands

## Create a Container Instance without Persistent Storage

Without persistent storage, your data lives only inside the container's ephemeral filesystem. This means that on a container restart, all the data that you stored will disappear.

### In Portal

1. Search for Container Instances in Marketplace and click Create:

<figure><img src="/files/KKUg9GeQy77TXfwgRgj8" alt=""><figcaption><p>Container Instances entry in Azure's Marketplace</p></figcaption></figure>

2. Select or create a Resource group, Container name, Region, Image source `Other registry`, Image type, Image `flowiseai/flowise`, OS type and Size. Then click "Next: Networking" to configure Flowise ports:

<figure><img src="/files/0K5KvXmhf9GhxDoohVAB" alt=""><figcaption><p>First page in the Container Instance create wizard</p></figcaption></figure>

3. Add a new port `3000 (TCP)` next to the default `80 (TCP)`. Then click "Next: Advanced":

<figure><img src="/files/jYbm3ER4y9TowH05Ymwu" alt=""><figcaption><p>Second page in the Container Instance create wizard. It asks for networking type and ports.</p></figcaption></figure>

4. Set Restart policy to `On failure`. Add Command override `["/bin/sh", "-c", "flowise start"]`. Finally click "Review + create":

<figure><img src="/files/Bs2mKODPFqPpDVQVZuar" alt=""><figcaption><p>Third page in the Container Instance create wizard. It asks for restart policy, environment variables and command that runs on container start.</p></figcaption></figure>

5. Review final settings and click "Create":

<figure><img src="/files/WeQVmK6IBXjlFKE8nXDS" alt=""><figcaption><p>Final review and create page for a Container Instance.</p></figcaption></figure>

6. Once creation is completed, click on "Go to resource"

<figure><img src="/files/bibb1y0NJ9yJ13o9VCIq" alt=""><figcaption><p>Resource creation result page in Azure.</p></figcaption></figure>

7. Visit your Flowise instance by copying IP address and adding :3000 as a port:

<figure><img src="/files/HFQnxBIPInQDLjTPa1Wx" alt=""><figcaption><p>Container Instance overview page</p></figcaption></figure>

<figure><img src="/files/9CVkbl3B6K9veE0jdTL0" alt=""><figcaption><p>Flowise application deployed as Container Instance</p></figcaption></figure>

### Create using Azure CLI

1. Create a resource group (if you don't already have one)

```bash
az group create --name flowise-rg --location "West US"
```

2. Create a Container Instance

```bash
az container create -g flowise-rg \
	--name flowise \
	--image flowiseai/flowise \
	--command-line "/bin/sh -c 'flowise start'" \
	--ip-address public \
	--ports 80 3000 \
	--restart-policy OnFailure
```

3. Visit the IP address (including port :3000) printed from the output of the above command.
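If you lose the output, you can look up the public IP again with `az container show`:

```bash
# Print just the public IP of the container group
az container show -g flowise-rg --name flowise \
	--query "ipAddress.ip" --output tsv
```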

## Create a Container Instance with Persistent Storage

The creation of a Container Instance with persistent storage is only possible using the CLI:

1. Create a resource group (if you don't already have one)

```bash
az group create --name flowise-rg --location "West US"
```

2. Create the Storage Account resource (or use an existing one) inside the above resource group. You can check how to do it [here](https://learn.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-portal?tabs=azure-portal).
3. Inside Azure Storage create new File share. You can check how to do it [here](https://learn.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-portal?tabs=azure-portal).
4. Create a Container Instance

```bash
az container create -g flowise-rg \
	--name flowise \
	--image flowiseai/flowise \
	--command-line "/bin/sh -c 'flowise start'" \
	--environment-variables DATABASE_PATH=/opt/flowise/.flowise SECRETKEY_PATH=/opt/flowise/.flowise LOG_PATH=/opt/flowise/.flowise/logs BLOB_STORAGE_PATH=/opt/flowise/.flowise/storage \
	--ip-address public \
	--ports 80 3000 \
	--restart-policy OnFailure \
	--azure-file-volume-share-name <your-file-share-name> \
	--azure-file-volume-account-name <your-storage-account-name> \
	--azure-file-volume-account-key <your-storage-account-key> \
	--azure-file-volume-mount-path /opt/flowise/.flowise
```

5. Visit the IP address (including port :3000) printed from the output of the above command.
6. From now on your data will be stored in an SQLite database which you can find in your File share.
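The account key used in step 4 can also be retrieved from the CLI rather than the Portal. For example, assuming your Storage Account is named `mystorageaccount`:

```bash
# Print the first access key of the storage account
az storage account keys list \
	--resource-group flowise-rg \
	--account-name mystorageaccount \
	--query "[0].value" --output tsv
```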

Watch video tutorial on deploying to Azure Container Instance:

{% embed url="https://www.youtube.com/watch?v=yDebxDfn2yk" %}


# Digital Ocean

Learn how to deploy Flowise on Digital Ocean

***

## Create Droplet

In this section, we are going to create a Droplet. For more information, refer to [official guide](https://docs.digitalocean.com/products/droplets/quickstart/).

1. First, Click **Droplets** from the dropdown

<figure><img src="/files/W5QoZPVhZeOMz00eynbI" alt=""><figcaption></figcaption></figure>

2. Select Data Region and a Basic $6/mo Droplet type

<figure><img src="/files/QKTJFhd9fag5NRZTExiZ" alt=""><figcaption></figcaption></figure>

3. Select Authentication Method. In this example, we are going to use Password

<figure><img src="/files/8K13jNOdZskAhXS3uF8w" alt=""><figcaption></figcaption></figure>

4. After a while you should be able to see your droplet created successfully

<figure><img src="/files/9HMmKFfulZBtrLt97sHp" alt=""><figcaption></figcaption></figure>

## How to Connect to your Droplet

For Windows follow this [guide](https://docs.digitalocean.com/products/droplets/how-to/connect-with-ssh/putty/).

For Mac/Linux, follow this [guide](https://docs.digitalocean.com/products/droplets/how-to/connect-with-ssh/openssh/).

## Install Docker

1. ```
   curl -fsSL https://get.docker.com -o get-docker.sh
   ```
2. ```
   sudo sh get-docker.sh
   ```
3. Install docker-compose:

```
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
```

4. Set permission:

```
sudo chmod +x /usr/local/bin/docker-compose
```
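You can confirm both installations before moving on:

```bash
docker --version
docker-compose --version
```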

## Setup

1. Clone the repo

```
git clone https://github.com/FlowiseAI/Flowise.git
```

2. Cd into docker folder

```bash
cd Flowise && cd docker
```

3. Create a `.env` file. You can use your favourite editor. I'll use `nano`

```bash
nano .env
```

<figure><img src="/files/vBHQz2KrKkvbTvCSEEkN" alt="" width="375"><figcaption></figcaption></figure>

4. Specify the env variables:

```sh
PORT=3000
DATABASE_PATH=/root/.flowise
SECRETKEY_PATH=/root/.flowise
LOG_PATH=/root/.flowise/logs
BLOB_STORAGE_PATH=/root/.flowise/storage
```

5. Then press `Ctrl + X` to Exit, and `Y` to save the file
6. Run docker compose

```bash
docker compose up -d
```

7. You can then open the app at `<Your Public IPv4 address>:3000`. Example: `176.63.19.226:3000`
8. You can bring the app down by:

```bash
docker compose stop
```

9. You can pull the latest image by:

```bash
docker pull flowiseai/flowise
```

## Adding Reverse Proxy & SSL

A reverse proxy is the recommended way to expose an application server to the internet. It lets us connect to our droplet using a URL alone instead of the server IP and port number. It also provides security benefits: it isolates the application server from direct internet access, allows centralized firewall protection, minimizes the attack surface for common threats such as denial-of-service attacks, and, most importantly for our purposes, lets us terminate SSL/TLS encryption in a single place.

> A lack of SSL on your Droplet will cause the embeddable widget and API endpoints to be inaccessible in modern browsers. This is because browsers have begun to deprecate HTTP in favor of HTTPS, and block HTTP requests from pages loaded over HTTPS.

### Step 1 — Installing Nginx

1. Nginx is available for installation with apt through the default repositories. Update your repository index, then install Nginx:

```bash
sudo apt update
sudo apt install nginx
```

> Press Y to confirm the installation. If you are asked to restart services, press ENTER to accept the defaults.

2. You need to allow access to Nginx through your firewall. Having set up your server according to the initial server prerequisites, add the following rule with ufw:

```bash
sudo ufw allow 'Nginx HTTP'
```

3. Now you can verify that Nginx is running:

```bash
systemctl status nginx
```

Output:

```bash
● nginx.service - A high performance web server and a reverse proxy server
     Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2022-08-29 06:52:46 UTC; 39min ago
       Docs: man:nginx(8)
   Main PID: 9919 (nginx)
      Tasks: 2 (limit: 2327)
     Memory: 2.9M
        CPU: 50ms
     CGroup: /system.slice/nginx.service
             ├─9919 "nginx: master process /usr/sbin/nginx -g daemon on; master_process on;"
             └─9920 "nginx: worker process"
```

Next you will add a custom server block with your domain and app server proxy.

### Step 2 — Configuring your Server Block + DNS Record

It is recommended practice to create a custom configuration file for your new server block additions, instead of editing the default configuration directly.

1. Create and open a new Nginx configuration file using nano or your preferred text editor:

```bash
sudo nano /etc/nginx/sites-available/your_domain
```

2. Insert the following into your new file, making sure to replace `your_domain` with your own domain name:

```
server {
    listen 80;
    listen [::]:80;
    server_name your_domain; #Example: demo.flowiseai.com
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_cache_bypass $http_upgrade;
    }
}
```

3. Save and exit; with `nano` you can do this by pressing `CTRL+O`, then `ENTER` to save, then `CTRL+X` to exit.
4. Next, enable this configuration file by creating a link from it to the sites-enabled directory that Nginx reads at startup, making sure again to replace `your_domain` with your own domain name:

```bash
sudo ln -s /etc/nginx/sites-available/your_domain /etc/nginx/sites-enabled/
```

5. You can now test your configuration file for syntax errors:

```bash
sudo nginx -t
```

6. With no problems reported, restart Nginx to apply your changes:

```bash
sudo systemctl restart nginx
```

7. Go to your DNS provider and add a new A record. The name will be your domain name, and the value will be the Public IPv4 address of your droplet.

<figure><img src="/files/yYWOFBngC5Y46rDmbwuw" alt="" width="367"><figcaption></figcaption></figure>

Nginx is now configured as a reverse proxy for your application server. You should now be able to open the app: <http://yourdomain.com>.
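You can verify the proxy from the command line as well; a `200 OK` (or a redirect) from your domain means Nginx is forwarding to the app. Replace `yourdomain.com` with your own domain:

```bash
# Fetch only the response headers through the reverse proxy
curl -I http://yourdomain.com
```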

### Step 3 — Installing Certbot for HTTPS (SSL)

If you'd like to add a secure `https` connection to your Droplet like <https://yourdomain.com>, you'll need to do the following:

1. For installing Certbot and enabling HTTPS on NGINX, we will rely on Python. So, first of all, let's set up a virtual environment:

```bash
sudo apt install python3.10-venv
sudo python3 -m venv /opt/certbot/
sudo /opt/certbot/bin/pip install --upgrade pip
```

2. Afterwards, run this command to install Certbot:

```bash
sudo /opt/certbot/bin/pip install certbot certbot-nginx
```

3. Now, execute the following command to ensure that the `certbot` command can be run:

```bash
sudo ln -s /opt/certbot/bin/certbot /usr/bin/certbot
```

4. Finally, run the following command to obtain a certificate and let Certbot automatically modify the NGINX configuration, enabling HTTPS:

```bash
sudo certbot --nginx
```

5. After following the certificate generation wizard, we will be able to access our Droplet via HTTPS using the address <https://yourdomain.com>

### Set up automatic renewal

To enable Certbot to automatically renew the certificates, it is sufficient to add a cron job by running the following command:

```bash
echo "0 0,12 * * * root /opt/certbot/bin/python -c 'import random; import time; time.sleep(random.random() * 3600)' && sudo certbot renew -q" | sudo tee -a /etc/crontab > /dev/null
```

## Congratulations!

You have successfully set up Flowise on your Droplet, with an SSL certificate on your domain [🥳](https://emojipedia.org/partying-face/)

## Steps to update Flowise on Digital Ocean

1. Navigate to the directory where you installed Flowise

```bash
cd Flowise/docker
```

2. Stop and remove docker image

Note: This will not delete your flows, as the database is stored in a separate folder.

```bash
sudo docker compose stop
sudo docker compose rm
```

3. Pull the latest Flowise Image

You can check the latest version release [here](https://github.com/FlowiseAI/Flowise/releases)

```bash
docker pull flowiseai/flowise
```

4. Start the docker

```bash
docker compose up -d
```


# GCP

Learn how to deploy Flowise on GCP

***

## Prerequisites

1. Note down your Google Cloud \[ProjectId]
2. Install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
3. Install the [Google Cloud CLI](https://cloud.google.com/sdk/docs/install-sdk)
4. Install [Docker Desktop](https://docs.docker.com/desktop/)

## Setup Kubernetes Cluster

1. Create a Kubernetes Cluster if you don't have one.

<figure><img src="/files/uPwAnu0EcbrQEFbEad6V" alt=""><figcaption><p>Click `Clusters` to create one.</p></figcaption></figure>

2. Name the Cluster, choose the right resource location, use `Autopilot` mode and keep all other default configs.
3. Once the Cluster is created, Click the 'Connect' menu from the actions menu

<figure><img src="/files/ouNmuYbEobkTvfGZoFLB" alt=""><figcaption></figcaption></figure>

4. Copy the command and paste into your terminal and hit enter to connect your cluster.
5. Run the command below and select the correct context name, which looks like `gke_[ProjectId]_[DataCenter]_[ClusterName]`

```
kubectl config get-contexts
```

6. Set the current context

```
kubectl config use-context gke_[ProjectId]_[DataCenter]_[ClusterName]
```
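A quick way to confirm the context points at your cluster is to query it directly:

```bash
# Both commands should respond with details of the GKE cluster
kubectl cluster-info
kubectl get nodes
```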

## Build and Push the Docker image

Run the following commands to build and push the Docker image to GCP Container Registry.

1. Clone the Flowise repository

```
git clone https://github.com/FlowiseAI/Flowise.git
```

2. Build Flowise

```
cd Flowise
pnpm install
pnpm build
```

3. Make the following updates to the `Dockerfile`:

> Specify the platform of nodejs
>
> ```
> FROM --platform=linux/amd64 node:18-alpine
> ```
>
> Add python3, make and g++ to install
>
> ```
> RUN apk add --no-cache python3 make g++
> ```

4. Build the Docker image, making sure the Docker Desktop app is running

```
docker build -t gcr.io/[ProjectId]/flowise:dev .
```

5. Push the Docker image to the GCP Container Registry.

```
docker push gcr.io/[ProjectId]/flowise:dev
```
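If the push is rejected with an authentication error, Docker likely isn't using your gcloud credentials yet; a one-time registry setup fixes that:

```bash
# Register gcloud as a Docker credential helper for gcr.io
gcloud auth configure-docker
```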

## Deployment to GCP

1. Create a `yamls` root folder in the project.
2. Add the `deployment.yaml` file into that folder.

```
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flowise
  labels:
    app: flowise
spec:
  selector:
    matchLabels:
      app: flowise
  replicas: 1
  template:
    metadata:
      labels:
        app: flowise
    spec:
      containers:
      - name: flowise
        image: gcr.io/[ProjectID]/flowise:dev
        imagePullPolicy: Always
        resources: 
          requests:
            cpu: "1"
            memory: "1Gi"
```

3. Add the `service.yaml` file into that folder.

```
# service.yaml
apiVersion: "v1"
kind: "Service"
metadata:
  name: "flowise-service"
  namespace: "default"
  labels:
    app: "flowise"
spec:
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 3000
  selector:
    app: "flowise"
  type: "LoadBalancer"

```

It should look like below.

<figure><img src="/files/9Qh1qYXyLxmOuQA0eKZu" alt=""><figcaption></figcaption></figure>

4. Deploy the YAML files by running the following commands.

```
kubectl apply -f yamls/deployment.yaml
kubectl apply -f yamls/service.yaml
```

5. Go to `Workloads` in the GCP Console; you should see your pod running.

<figure><img src="/files/2d1PaaupLwz6wgAHaP8d" alt=""><figcaption></figcaption></figure>

6. Go to `Services & Ingress` and click the `Endpoint` where Flowise is hosted.

<figure><img src="/files/gatbCg0f0rXnCSVjT6he" alt=""><figcaption></figcaption></figure>
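The same checks can be done from the CLI; the external IP appears once the LoadBalancer is provisioned:

```bash
# Wait for the deployment to finish rolling out
kubectl rollout status deployment/flowise

# Show the service's external IP (pending until the LoadBalancer is ready)
kubectl get service flowise-service
```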

## Congratulations!

You have successfully hosted Flowise on GCP [🥳](https://emojipedia.org/partying-face/)

## Timeout

By default, GCP assigns a 30-second timeout to the proxy. This causes issues when a response takes longer than 30 seconds to return. To fix this, make the following changes to the YAML files:

Note: to set the timeout to 10 minutes, for example, we specify 600 seconds below.

1. Create a `backendconfig.yaml` file with the following content:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: flowise-backendconfig
  namespace: your-namespace
spec:
  timeoutSec: 600
```

2. Run `kubectl apply -f backendconfig.yaml`
3. Update your `service.yaml` file with the following reference to the `BackendConfig`:

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/backend-config: '{"default": "flowise-backendconfig"}'
  name: flowise-service
  namespace: your-namespace
...
```

4. Run `kubectl apply -f service.yaml`


# Hugging Face

Learn how to deploy Flowise on Hugging Face

***

### Create a new space

1. Sign in to [Hugging Face](https://huggingface.co/login)
2. Start creating a [new Space](https://huggingface.co/new-space) with your preferred name.
3. Select **Docker** as **Space SDK** and choose **Blank** as the Docker template.
4. Select **CPU basic ∙ 2 vCPU ∙ 16GB ∙ FREE** as **Space hardware**.
5. Click **Create Space**.

### Set the environment variables

1. Go to **Settings** of your new space and find the **Variables and Secrets** section
2. Click on **New variable** and add the name as `PORT` with value `7860`
3. Click on **Save**
4. *(Optional)* Click on **New secret**
5. *(Optional)* Fill in with your environment variables, such as database credentials, file paths, etc. You can check for valid fields in the `.env.example` [here](https://github.com/FlowiseAI/Flowise/blob/main/docker/.env.example)

### Create a Dockerfile

1. At the files tab, click on button ***+ Add file*** and click on **Create a new file** (or Upload files if you prefer to)
2. Create a file called **Dockerfile** and paste the following:

```dockerfile
FROM node:20-alpine
USER root

# Arguments that can be passed at build time
ARG FLOWISE_PATH=/usr/local/lib/node_modules/flowise
ARG BASE_PATH=/root/.flowise
ARG DATABASE_PATH=$BASE_PATH
ARG SECRETKEY_PATH=$BASE_PATH
ARG LOG_PATH=$BASE_PATH/logs
ARG BLOB_STORAGE_PATH=$BASE_PATH/storage

# Install dependencies
RUN apk add --no-cache git python3 py3-pip make g++ build-base cairo-dev pango-dev chromium

ENV PUPPETEER_SKIP_DOWNLOAD=true
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser

# Install Flowise globally
RUN npm install -g flowise

# Configure Flowise directories using the ARG
RUN mkdir -p $LOG_PATH $FLOWISE_PATH/uploads && chmod -R 777 $LOG_PATH $FLOWISE_PATH

WORKDIR /data

CMD ["npx", "flowise", "start"]
```

3. Click on **Commit file to `main`** and it will start to build your app.

### Done 🎉

When the build finishes you can click on the **App** tab to see your app running.


# Railway

Learn how to deploy Flowise on Railway

***

1. Click the following prebuilt [template](https://railway.app/template/pn4G8S?referralCode=WVNPD9)
2. Click Deploy Now

<figure><img src="/files/t4432H7kSoD5sf0zy4QD" alt=""><figcaption></figcaption></figure>

3. Change to your preferred repository name and click Deploy

<figure><img src="/files/HXQXonlCz7RyJ0oSM09m" alt="" width="375"><figcaption></figcaption></figure>

4. If the deployment succeeds, you should see a deployed URL

<figure><img src="/files/Suj1AwaFEvaPU2faD72V" alt=""><figcaption></figcaption></figure>

5. To add authorization, navigate to the Variables tab and add:

* FLOWISE\_USERNAME
* FLOWISE\_PASSWORD

<figure><img src="/files/UXxUOLHQToNu7Dzmi88k" alt=""><figcaption></figcaption></figure>

6. There is a full list of environment variables you can configure. Refer to [Environment Variables](/configuration/environment-variables)

That's it! You now have Flowise deployed on Railway [🎉](https://emojipedia.org/party-popper/)[🎉](https://emojipedia.org/party-popper/)

## Persistent Volume

The default filesystem for services running on Railway is ephemeral. Flowise data isn’t persisted across deploys and restarts. To solve this issue, we can use [Railway Volume](https://docs.railway.app/reference/volumes).

To simplify the steps, we have a Railway template with the volume already mounted: <https://railway.app/template/nEGbjR>

Just click Deploy and fill in the environment variables as below:

* DATABASE\_PATH - `/opt/railway/.flowise`
* APIKEY\_PATH - `/opt/railway/.flowise`
* LOG\_PATH - `/opt/railway/.flowise/logs`
* SECRETKEY\_PATH - `/opt/railway/.flowise`
* BLOB\_STORAGE\_PATH - `/opt/railway/.flowise/storage`

<figure><img src="/files/jT3KK9x2dkVipoXOXQwE" alt="" width="420"><figcaption></figcaption></figure>

Now try creating a flow and saving it in Flowise. Then restart the service or redeploy; you should still see the flow you saved previously.


# Render

Learn how to deploy Flowise on Render

***

1. Fork [Flowise Official Repository](https://github.com/FlowiseAI/Flowise)
2. Visit your GitHub profile to verify that you have successfully made a fork
3. Sign in to [Render](https://dashboard.render.com)
4. Click **New +**

<figure><img src="/files/LvKz6EmzPJrIl5oVyyCP" alt="" width="563"><figcaption></figcaption></figure>

5. Select **Web Service**

<figure><img src="/files/8VFsniG0uiSWAIJdgOhx" alt=""><figcaption></figcaption></figure>

6. Connect Your GitHub Account
7. Select your forked Flowise repo and click **Connect**

<figure><img src="/files/ANAciUP0TnmDl7wN0B6u" alt="" width="563"><figcaption></figcaption></figure>

8. Fill in your preferred **Name** and **Region**.
9. Select `Docker` as your **Runtime**

<figure><img src="/files/LsSJx6NMLFma7xe3BgUw" alt=""><figcaption></figcaption></figure>

10. Select an **Instance**

<figure><img src="/files/UdPOIxk0zeYECqe7B6x8" alt=""><figcaption></figcaption></figure>

11. *(Optional)* To add app-level authorization, click **Advanced** and add the environment variables:

* FLOWISE\_USERNAME
* FLOWISE\_PASSWORD

<figure><img src="/files/pQuWh7zaqZBVV6vY2VYe" alt=""><figcaption></figcaption></figure>

Add `NODE_VERSION` with value `18.18.1` to set the Node.js version used to run the instance.

There is a full list of environment variables you can configure. Refer to [Environment Variables](/configuration/environment-variables)

12. Click **Create Web Service**

<figure><img src="/files/slPgoUanWqnZFgDA7yTG" alt=""><figcaption></figcaption></figure>

13. Navigate to the deployed URL and that's it [🚀](https://emojipedia.org/rocket/)[🚀](https://emojipedia.org/rocket/)

<figure><img src="/files/keYVTVddVPcvwyArqA2T" alt=""><figcaption></figcaption></figure>

<figure><img src="/files/keYVTVddVPcvwyArqA2T" alt=""><figcaption></figcaption></figure>

## Persistent Disk

The default filesystem for services running on Render is ephemeral. Flowise data isn’t persisted across deploys and restarts. To solve this issue, we can use [Render Disk](https://render.com/docs/disks).

1. On the left hand side bar, click **Disks**
2. Name your disk, and specify the **Mount Path** to `/opt/render/.flowise`

<figure><img src="/files/0omqILNuhxuFRNZC387h" alt=""><figcaption></figcaption></figure>

3. Click the **Environment** section, and add these new environment variables:

* HOST - `0.0.0.0`
* DATABASE\_PATH - `/opt/render/.flowise`
* APIKEY\_PATH - `/opt/render/.flowise`
* LOG\_PATH - `/opt/render/.flowise/logs`
* SECRETKEY\_PATH - `/opt/render/.flowise`
* BLOB\_STORAGE\_PATH - `/opt/render/.flowise/storage`

<figure><img src="/files/QVTgOezavDRUSoIPRe4S" alt=""><figcaption></figcaption></figure>

4. Click **Manual Deploy** then select **Clear build cache & deploy**

<figure><img src="/files/vxUVuAjlXi3wOhPjhd5V" alt=""><figcaption></figcaption></figure>

5. Now try creating a flow and saving it in Flowise. Then restart the service or redeploy; you should still see the flow you saved previously.

Watch how to deploy to Render

{% embed url="https://youtu.be/Fxyc6-frgrI" %}

{% embed url="https://youtu.be/l-0NzOMeCco" %}


# Replit

Learn how to deploy Flowise on Replit

***

1. Sign in to [Replit](https://replit.com/~)
2. Create a new **Repl**. Select **Node.js** as Template and fill in your preferred **Title**.

<figure><img src="/files/kG5CDFqe43VpfX8DYGau" alt="" width="551"><figcaption></figcaption></figure>

3. After the new Repl is created, click **Secrets** on the left-hand sidebar:

<figure><img src="/files/lPIpuaVitSJSlf2LnJEE" alt="" width="219"><figcaption></figcaption></figure>

4. Create 3 secrets to skip the Chromium download for the Puppeteer and Playwright libraries.

<table><thead><tr><th width="403">Secrets</th><th>Value</th></tr></thead><tbody><tr><td>PLAYWRIGHT_SKIP_BROWSER_DOWNLOAD</td><td>1</td></tr><tr><td>PUPPETEER_SKIP_DOWNLOAD</td><td>true</td></tr><tr><td>PUPPETEER_SKIP_CHROMIUM_DOWNLOAD</td><td>true</td></tr></tbody></table>

<figure><img src="/files/MfpXgx6aUmCCGdyfYZxO" alt="" width="535"><figcaption></figcaption></figure>

5. You can now switch to the **Shell** tab

<figure><img src="/files/LYKTjTShJzz6u2Q2cqAS" alt="" width="539"><figcaption></figcaption></figure>

6. Type `npm install -g flowise` into the Shell terminal window. If you get an error about an incompatible Node version, use the following command instead: `yarn global add flowise --ignore-engines`

<figure><img src="/files/gF5aVWiT3eLpoiaAKCC1" alt="" width="530"><figcaption></figcaption></figure>

7. Then run `npx flowise start`

<figure><img src="/files/OzKO18FkU7mIIIxfbBDP" alt="" width="533"><figcaption></figcaption></figure>

8. You should now be able to see Flowise on Replit!

<figure><img src="/files/jm6vvXMukz90OgrkxMQs" alt="" width="545"><figcaption></figcaption></figure>

9. You will now see a login page. Simply log in with the username and password you've set.

<figure><img src="/files/MYWvtvP7wn48RPajQ8lV" alt=""><figcaption></figcaption></figure>


# Sealos

Learn how to deploy Flowise on Sealos

***

1. Click the following prebuilt [template](https://template.sealos.io/deploy?templateName=flowise) or the button below.

[![Deploy on Sealos](https://sealos.io/Deploy-on-Sealos.svg)](https://template.sealos.io/deploy?templateName=flowise)

2. Add authorization
   * FLOWISE\_USERNAME
   * FLOWISE\_PASSWORD

<figure><img src="/files/Ulqt5ahBiNHQsNEKsG6J" alt=""><figcaption></figcaption></figure>

3. Click "Deploy Application" on the template page to start deployment.
4. Once deployment concludes, click "Details" to navigate to the application's details.

<figure><img src="/files/dS5ik1asxG6Zni9LGh0O" alt=""><figcaption></figcaption></figure>

5. Wait for the application's status to switch to running. Subsequently, click on the external link to open the application's Web interface directly through the external domain.

<figure><img src="/files/pfsY5g4iZN28EIhFSWCX" alt=""><figcaption></figcaption></figure>

## Persistent Volume

Click **Update** at the top right of the app details page, then click **Advanced** -> **Add volume** and fill in the **mount path**: `/root/.flowise`.

<figure><img src="/files/IjOwRa3RKXUkYJGIoiFJ" alt="" width="375"><figcaption></figcaption></figure>

To wrap up, click the "Deploy" button.

Now try creating a flow and saving it in Flowise. Then restart the service or redeploy; you should still see the flow you saved previously.


# Zeabur

Learn how to deploy Flowise on Zeabur

***

{% hint style="warning" %}
Please note that the following template made by Zeabur is outdated (from 2024-01-24).
{% endhint %}

1. Click the following prebuilt [template](https://zeabur.com/templates/2JYZTR) or the button below.

[![Deploy on Zeabur](https://zeabur.com/button.svg)](https://zeabur.com/templates/2JYZTR)

2. Click Deploy

<figure><img src="/files/EExgQAWrcsuYIkAMPkNC" alt="zeabur template"><figcaption></figcaption></figure>

3. Select your favorite region and continue

<figure><img src="/files/SpDUKJwhxMi5AfuTppZq" alt="select region"><figcaption></figcaption></figure>

4. You will be redirected to Zeabur's dashboard and you will see the deployment process

<figure><img src="/files/Ve6loHHfnjJF8Pt924Z5" alt="deployment process"><figcaption></figcaption></figure>

5. To add authorization, navigate to the Variables tab and add:

* FLOWISE\_USERNAME
* FLOWISE\_PASSWORD

<figure><img src="/files/ROgcGYKocxFaug5PzYBZ" alt="authorization"><figcaption></figcaption></figure>

6. There is a full list of environment variables you can configure. Refer to [Environment Variables](/configuration/environment-variables)

That's it! You now have Flowise deployed on Zeabur [🎉](https://emojipedia.org/party-popper/)[🎉](https://emojipedia.org/party-popper/)

## Persistent Volume

Zeabur will automatically create a persistent volume for you so you don't have to worry about it.


# Environment Variables

Learn how to configure environment variables for Flowise

Flowise supports various environment variables to configure your instance. You can specify the following variables in the `.env` file inside the `packages/server` folder. Refer to the [.env.example](https://github.com/FlowiseAI/Flowise/blob/main/packages/server/.env.example) file.

<table><thead><tr><th width="233">Variable</th><th width="219">Description</th><th width="104">Type</th><th>Default</th></tr></thead><tbody><tr><td>PORT</td><td>The HTTP port Flowise runs on</td><td>Number</td><td>3000</td></tr><tr><td>FLOWISE_FILE_SIZE_LIMIT</td><td>Maximum file size when uploading</td><td>String</td><td><code>50mb</code></td></tr><tr><td>NUMBER_OF_PROXIES</td><td>Rate Limit Proxy</td><td>Number</td><td></td></tr><tr><td>CORS_ORIGINS</td><td>The allowed origins for all cross-origin HTTP calls</td><td>String</td><td></td></tr><tr><td>IFRAME_ORIGINS</td><td>The allowed origins for iframe src embedding</td><td>String</td><td></td></tr><tr><td>SHOW_COMMUNITY_NODES</td><td>Display nodes that are created by community</td><td>Boolean: <code>true</code> or <code>false</code></td><td></td></tr><tr><td>DISABLED_NODES</td><td>Comma separated list of node names to disable</td><td>String</td><td></td></tr></tbody></table>
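
For illustration, a `.env` file in `packages/server` combining a few of these variables might look like the following (the domain is a placeholder):

```bash
# packages/server/.env -- example values; adjust to your deployment
PORT=3000
FLOWISE_FILE_SIZE_LIMIT=50mb

# Allow cross-origin API calls and iframe embedding from a hypothetical domain
CORS_ORIGINS=https://example.com
IFRAME_ORIGINS=https://example.com

# Hide community-created nodes
SHOW_COMMUNITY_NODES=false
```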

## For Database

| Variable           | Description                                                      | Type                                       | Default                  |
| ------------------ | ---------------------------------------------------------------- | ------------------------------------------ | ------------------------ |
| DATABASE\_TYPE     | Type of database to store the flowise data                       | Enum String: `sqlite`, `mysql`, `postgres` | `sqlite`                 |
| DATABASE\_PATH     | Location where database is saved (When DATABASE\_TYPE is sqlite) | String                                     | `your-home-dir/.flowise` |
| DATABASE\_HOST     | Host URL or IP address (When DATABASE\_TYPE is not sqlite)       | String                                     |                          |
| DATABASE\_PORT     | Database port (When DATABASE\_TYPE is not sqlite)                | String                                     |                          |
| DATABASE\_USER     | Database username (When DATABASE\_TYPE is not sqlite)            | String                                     |                          |
| DATABASE\_PASSWORD | Database password (When DATABASE\_TYPE is not sqlite)            | String                                     |                          |
| DATABASE\_NAME     | Database name (When DATABASE\_TYPE is not sqlite)                | String                                     |                          |
| DATABASE\_SSL      | Database SSL is required (When DATABASE\_TYPE is not sqlite)     | Boolean: `true` or `false`                 | `false`                  |
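
For example, to switch from the default SQLite file to a PostgreSQL database (host and credentials below are placeholders):

```bash
# .env -- PostgreSQL configuration with placeholder credentials
DATABASE_TYPE=postgres
DATABASE_HOST=localhost
DATABASE_PORT=5432
DATABASE_NAME=flowise
DATABASE_USER=flowise_user
DATABASE_PASSWORD=<YOUR_DB_PASSWORD>
DATABASE_SSL=false
```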

## For Storage

Flowise stores the following files under a local path folder by default.

* Files uploaded on [Document Loaders](/integrations/langchain/document-loaders)/Document Store
* Image/Audio uploads from chat
* Images/Files from Assistant
* Files from the Vector Upsert API

Users can set `STORAGE_TYPE` to use AWS S3, Google Cloud Storage, or a local path

| Variable                               | Description                                                                      | Type                              | Default                          |
| -------------------------------------- | -------------------------------------------------------------------------------- | --------------------------------- | -------------------------------- |
| STORAGE\_TYPE                          | Type of storage for uploaded files. default is `local`                           | Enum String: `s3`, `gcs`, `local` | `local`                          |
| BLOB\_STORAGE\_PATH                    | Local folder path where uploaded files are stored when `STORAGE_TYPE` is `local` | String                            | `your-home-dir/.flowise/storage` |
| S3\_STORAGE\_BUCKET\_NAME              | Bucket name to hold the uploaded files when `STORAGE_TYPE` is `s3`               | String                            |                                  |
| S3\_STORAGE\_ACCESS\_KEY\_ID           | AWS Access Key                                                                   | String                            |                                  |
| S3\_STORAGE\_SECRET\_ACCESS\_KEY       | AWS Secret Key                                                                   | String                            |                                  |
| S3\_STORAGE\_REGION                    | Region for S3 bucket                                                             | String                            |                                  |
| S3\_ENDPOINT\_URL                      | Custom S3 endpoint (optional)                                                    | String                            |                                  |
| S3\_FORCE\_PATH\_STYLE                 | Force S3 path style (optional)                                                   | Boolean                           | false                            |
| GOOGLE\_CLOUD\_STORAGE\_CREDENTIAL     | Google Cloud Service Account Key                                                 | String                            |                                  |
| GOOGLE\_CLOUD\_STORAGE\_PROJ\_ID       | Google Cloud Project ID                                                          | String                            |                                  |
| GOOGLE\_CLOUD\_STORAGE\_BUCKET\_NAME   | Google Cloud Storage Bucket Name                                                 | String                            |                                  |
| GOOGLE\_CLOUD\_UNIFORM\_BUCKET\_ACCESS | Type of Access                                                                   | Boolean                           | true                             |
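
As a sketch, an S3-backed storage configuration might look like this (bucket, region, and keys are placeholders):

```bash
# .env -- store uploaded files in S3 instead of the local filesystem
STORAGE_TYPE=s3
S3_STORAGE_BUCKET_NAME=<YOUR_BUCKET_NAME>
S3_STORAGE_REGION=us-east-1
S3_STORAGE_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID>
S3_STORAGE_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY>
```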

## For Debugging and Logs

| Variable   | Description                         | Type                                             | Default                        |
| ---------- | ----------------------------------- | ------------------------------------------------ | ------------------------------ |
| DEBUG      | Print logs from components          | Boolean                                          |                                |
| LOG\_PATH  | Location where log files are stored | String                                           | `Flowise/packages/server/logs` |
| LOG\_LEVEL | Different levels of logs            | Enum String: `error`, `info`, `verbose`, `debug` | `info`                         |
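
For example, to print component logs to the console and save full detail to the log files (the log path is a hypothetical override):

```bash
# .env -- verbose logging for debugging
DEBUG=true
LOG_LEVEL=debug
# Optional: override the default log location
LOG_PATH=/var/log/flowise
```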

`DEBUG`: if set to `true`, logs from components are printed to the terminal/console:

<figure><img src="/files/MgShSJ6dtcABHSqtX2xR" alt=""><figcaption></figcaption></figure>

`LOG_LEVEL`: the level of logs to be saved. Can be `error`, `info`, `verbose`, or `debug`. By default it is set to `info`, so only `logger.info` entries are saved to the log files. If you want complete details, set it to `debug`.

<figure><img src="/files/lIynQJyGboLfEdaGIjA7" alt=""><figcaption><p><strong>server-requests.log.jsonl - logs every request sent to Flowise</strong></p></figcaption></figure>

<figure><img src="/files/1veZxfCEY8sjw4Jcla5N" alt=""><figcaption><p><strong>server.log - logs general actions on Flowise</strong></p></figcaption></figure>

<figure><img src="/files/59o0tB5QR2aNJTD0T1f8" alt=""><figcaption><p><strong>server-error.log - logs error with stack trace</strong></p></figcaption></figure>

### Logs Streaming S3

When the `STORAGE_TYPE` environment variable is set to `s3`, logs are automatically streamed and stored to S3. A new log file is created hourly, enabling easier debugging.

### Logs Streaming GCS

When the `STORAGE_TYPE` environment variable is set to `gcs`, logs are automatically streamed to Google [Cloud Logging](https://cloud.google.com/logging?hl=en).

## For Credentials

Flowise stores your third-party API keys as encrypted credentials using an encryption key.

By default, a random encryption key is generated when the application starts up and stored under a file path. This encryption key is then retrieved every time to decrypt the credentials used within a chatflow, for example your OpenAI API key, Pinecone API key, etc.

You can configure Flowise to use AWS Secrets Manager to store the encryption key instead.

| Variable                      | Description                                           | Type                        | Default                   |
| ----------------------------- | ----------------------------------------------------- | --------------------------- | ------------------------- |
| SECRETKEY\_STORAGE\_TYPE      | How to store the encryption key                       | Enum String: `local`, `aws` | `local`                   |
| SECRETKEY\_PATH               | Local file path where encryption key is saved         | String                      | `Flowise/packages/server` |
| FLOWISE\_SECRETKEY\_OVERWRITE | Encryption key to be used instead of the existing key | String                      |                           |
| SECRETKEY\_AWS\_ACCESS\_KEY   |                                                       | String                      |                           |
| SECRETKEY\_AWS\_SECRET\_KEY   |                                                       | String                      |                           |
| SECRETKEY\_AWS\_REGION        |                                                       | String                      |                           |

Sometimes the encryption key might be re-generated, or the stored path might have changed; this causes errors like <mark style="color:red;">Credentials could not be decrypted.</mark>

To avoid this, you can set your own encryption key as `FLOWISE_SECRETKEY_OVERWRITE`, so that the same encryption key is used every time. There is no restriction on the format; you can set it to any text you want, or even the same value as your `FLOWISE_PASSWORD`.
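
A minimal sketch, assuming you pick your own key string:

```bash
# .env -- pin the encryption key so credentials survive re-deploys
FLOWISE_SECRETKEY_OVERWRITE=<ANY_LONG_RANDOM_STRING>
```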

<figure><img src="/files/YnCXO8sNRLDVnZaw4U8j" alt=""><figcaption></figcaption></figure>

{% hint style="info" %}
The credential API key returned from the UI is not the same as the original API key you set; it is a placeholder string, so the real key is never sent back to the UI. The correct API key is still retrieved and used during your interactions with the chatflow.
{% endhint %}

## For Models

In some cases, you might want to use a custom model with the existing Chat Model and LLM nodes, or restrict access to only certain models.

By default, Flowise pulls the model list from [here](https://github.com/FlowiseAI/Flowise/blob/main/packages/components/models.json). However, users can create their own `models.json` file and specify the file path:

<table><thead><tr><th width="164">Variable</th><th width="196">Description</th><th width="78">Type</th><th>Default</th></tr></thead><tbody><tr><td>MODEL_LIST_CONFIG_JSON</td><td>Link to load list of models from your <code>models.json</code> config file</td><td>String</td><td><a href="https://raw.githubusercontent.com/FlowiseAI/Flowise/main/packages/components/models.json">https://raw.githubusercontent.com/FlowiseAI/Flowise/main/packages/components/models.json</a></td></tr></tbody></table>
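
For instance, assuming you keep your own copy of `models.json` on the server (the path below is hypothetical):

```bash
# .env -- load the model list from your own config file instead of the default
MODEL_LIST_CONFIG_JSON=/data/models.json
```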

## For Built-In and External Dependencies

Certain nodes/features within Flowise allow users to run JavaScript code. For security reasons, only certain dependencies are allowed by default. It's possible to lift that restriction for built-in and external modules by setting the following environment variables:

<table><thead><tr><th>Variable</th><th width="300.4444580078125">Description</th><th></th></tr></thead><tbody><tr><td>TOOL_FUNCTION_BUILTIN_DEP</td><td>NodeJS built-in modules to be used</td><td>String</td></tr><tr><td>TOOL_FUNCTION_EXTERNAL_DEP</td><td>External modules to be used</td><td>String</td></tr><tr><td>ALLOW_BUILTIN_DEP</td><td>Allow project dependencies to be used such as <code>cheerio</code>, <code>typeorm</code></td><td>Boolean</td></tr></tbody></table>

{% code title=".env" %}

```bash
# Allows usage of all builtin modules
TOOL_FUNCTION_BUILTIN_DEP=*

# Allows usage of only fs
TOOL_FUNCTION_BUILTIN_DEP=fs

# Allows usage of only crypto and fs
TOOL_FUNCTION_BUILTIN_DEP=crypto,fs

# Allow usage of external npm modules.
TOOL_FUNCTION_EXTERNAL_DEP=cheerio,typeorm

ALLOW_BUILTIN_DEP=true
```

{% endcode %}

### Using Built In Dependencies

{% hint style="warning" %}
Some built-in dependencies, such as Puppeteer, may introduce potential security vulnerabilities. It is recommended to analyze and assess these risks carefully before using them.
{% endhint %}

### NodeVM Execution Error: VMError: Cannot find module

If you are using a library that is not allowed by default, you can either:

1. Allow all project's [libraries/dependencies](https://github.com/FlowiseAI/Flowise/blob/main/packages/components/src/utils.ts#L52): `ALLOW_BUILTIN_DEP=true`
2. (Recommended) Specifically allow certain libraries/dependencies: `TOOL_FUNCTION_EXTERNAL_DEP=cheerio,typeorm`

## Security Configuration

<table><thead><tr><th width="246.4444580078125">Variable</th><th width="180.4444580078125">Description</th><th width="192.666748046875">Options</th><th>Default</th></tr></thead><tbody><tr><td><code>HTTP_DENY_LIST</code></td><td>Blocks HTTP requests to specified URLs or domains in MCP servers</td><td>Comma-separated URLs/domains</td><td><em>(empty)</em></td></tr><tr><td><code>CUSTOM_MCP_SECURITY_CHECK</code></td><td>Enables comprehensive security validation for Custom MCP configurations</td><td><code>true</code> | <code>false</code></td><td><code>true</code></td></tr><tr><td><code>CUSTOM_MCP_PROTOCOL</code></td><td>Sets the default protocol for Custom MCP communication</td><td><code>stdio</code> | <code>sse</code></td><td><code>stdio</code></td></tr></tbody></table>

#### `CUSTOM_MCP_SECURITY_CHECK=true`

This is enabled by default. When enabled, it applies the following security validations:

* **Command Allowlist**: Only permits safe commands (`node`, `npx`, `python`, `python3`, `docker`)
* **Argument Validation**: Blocks dangerous file paths, directory traversal, and executable files
* **Injection Prevention**: Prevents shell metacharacters and command chaining
* **Environment Protection**: Blocks modification of critical environment variables (PATH, LD\_LIBRARY\_PATH)

#### `CUSTOM_MCP_PROTOCOL`

* **`stdio`**: Direct process communication (default, requires command execution)
* **`sse`**: Server-Sent Events over HTTP (recommended for production, more secure)

### Recommended Production Settings

```bash
# Enable security validation (default)
CUSTOM_MCP_SECURITY_CHECK=true

# Use SSE protocol for better security
CUSTOM_MCP_PROTOCOL=sse

# Block dangerous domains (example)
HTTP_DENY_LIST=localhost,127.0.0.1,internal.company.com

# Blocks a hardcoded list of dangerous domains by default, but can be set to false to disable
HTTP_SECURITY_CHECK=true

# Enables checks on provided file and folder paths to prevent path traversal attacks
PATH_TRAVERSAL_SAFETY=true
```

{% hint style="warning" %}
**Warning**: Disabling `CUSTOM_MCP_SECURITY_CHECK` allows arbitrary command execution and poses significant security risks in production environments.

`HTTP_SECURITY_CHECK` enables a built-in security feature that blocks a hardcoded list of dangerous domains. It is `true` by default and can be disabled by setting it to `false`.

`HTTP_DENY_LIST` allows you to specify an additional, custom list of domains to block. This list is empty by default.

`PATH_TRAVERSAL_SAFETY` enables a built-in security feature to prevent path traversal attacks on file and folder paths. It is `true` by default and can be disabled by setting it to `false`.
{% endhint %}

## Examples of how to set environment variables

### NPM

You can set all these variables when running Flowise using npx. For example:

```bash
npx flowise start --PORT=3000 --DEBUG=true
```

### Docker

```bash
docker run -d -p 3000:3000 \
 -e DATABASE_TYPE=postgres \
 -e DATABASE_PORT=<POSTGRES_PORT> \
 -e DATABASE_HOST=<POSTGRES_HOST> \
 -e DATABASE_NAME=<POSTGRES_DATABASE_NAME> \
 -e DATABASE_USER=<POSTGRES_USER> \
 -e DATABASE_PASSWORD=<POSTGRES_PASSWORD> \
 flowiseai/flowise
```

### Docker Compose

You can set all these variables in the `.env` file inside `docker` folder. Refer to [.env.example](https://github.com/FlowiseAI/Flowise/blob/main/docker/.env.example) file.


# Rate Limit

Learn how to manage API requests in Flowise

***

When you share your chatflow publicly with no API authorization, through the API or the embedded chat, anybody can access the flow. To prevent spamming, you can set a rate limit on your chatflow.

<figure><img src="/files/YjDa6pZ99RKeGvoKDa9a" alt="" width="462"><figcaption></figcaption></figure>

* **Message Limit per Duration**: How many messages can be received in a specific duration. Ex: 20
* **Duration in Seconds**: The specified duration. Ex: 60
* **Limit Message**: What message to return when the limit is exceeded. Ex: Quota Exceeded

Using the example above, only 20 messages are allowed to be received within 60 seconds. The rate limit is tracked by IP address. If you have deployed Flowise on a cloud service, you'll have to set the `NUMBER_OF_PROXIES` environment variable.

## Rate Limit Setup

When you are hosting Flowise on a cloud provider such as AWS, GCP, or Azure, you are most likely behind a proxy/load balancer, so the rate limit might not work correctly. More info can be found [here](https://github.com/express-rate-limit/express-rate-limit/wiki/Troubleshooting-Proxy-Issues).

To fix the issue:

1. **Set Environment Variable:** Create an environment variable named `NUMBER_OF_PROXIES` and set its value to `0` in your hosting environment.
2. **Restart your hosted Flowise instance:** This applies the environment variable changes.
3. **Check IP Address:** To verify the IP address, access the following URL: `{{hosted_url}}/api/v1/ip`. You can do this either by entering the URL into your web browser or by making an API request.
4. **Compare IP Address:** After making the request, compare the IP address returned to your current IP address. You can find your current IP address by visiting either of these websites:
   * <http://ip.nfriedly.com/>
   * <https://api.ipify.org/>
5. **Incorrect IP Address:** If the returned IP address does not match your current IP address, increase `NUMBER_OF_PROXIES` by 1 and restart your Flowise instance. Repeat this process until the IP address matches your own.
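
The checks in steps 3 to 5 can be run from a terminal; the hostname below is a placeholder for your own deployment:

```bash
# IP address as seen by Flowise (behind the proxy/load balancer)
curl https://your-flowise-instance.com/api/v1/ip

# Your actual public IP address, for comparison
curl https://api.ipify.org
```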


# Running Flowise behind company proxy

If you're running Flowise in an environment that requires a proxy, such as within an organizational network, you can configure Flowise to route all its backend requests through a proxy of your choice. This feature is powered by the `global-agent` package.

<https://github.com/gajus/global-agent>

## Configuration

There are three environment variables relevant to running Flowise behind a company proxy; only the first is required:

| Variable                   | Purpose                                                                          | Required |
| -------------------------- | -------------------------------------------------------------------------------- | -------- |
| `GLOBAL_AGENT_HTTP_PROXY`  | Where to proxy all server HTTP requests through                                  | Yes      |
| `GLOBAL_AGENT_HTTPS_PROXY` | Where to proxy all server HTTPS requests through                                 | No       |
| `GLOBAL_AGENT_NO_PROXY`    | A pattern of URLs that should be excluded from proxying. Eg. `*.foo.com,baz.com` | No       |
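
For example, a configuration pointing at a hypothetical internal proxy:

```bash
# .env -- route all server requests through a company proxy
GLOBAL_AGENT_HTTP_PROXY=http://proxy.internal:3128
GLOBAL_AGENT_HTTPS_PROXY=http://proxy.internal:3128

# Skip the proxy for selected hosts
GLOBAL_AGENT_NO_PROXY=*.foo.com,baz.com
```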

## Outbound Allow-list

On the Enterprise plan, you must allow several outbound connections for license checking. Please contact <support@flowiseai.com> for more information.


# SSO

{% hint style="info" %}
SSO is only available on the Enterprise plan
{% endhint %}

Flowise supports [OIDC](https://openid.net/), which allows users to use single sign-on (SSO) to access the application. Currently, only the [Organization Admin](/using-flowise/workspaces#setting-up-admin-account) can configure the SSO settings.

## Microsoft

1. In the Azure portal, search for Microsoft Entra ID:

<figure><img src="/files/JMutCIxp73umrR1Lsiz0" alt=""><figcaption></figcaption></figure>

2. From the left hand bar, click App Registrations, then New Registration:

<figure><img src="/files/Xdm5vLwGd9KQlXi9kRjo" alt=""><figcaption></figcaption></figure>

3. Enter an app name, and select Single Tenant:

<figure><img src="/files/NZgz0V9yX74gRQA5lobV" alt=""><figcaption></figcaption></figure>

4. After an app is created, note down the Application (client) ID and Directory (tenant) ID:

<figure><img src="/files/6KQJObOHrxZimYal8XL3" alt=""><figcaption></figcaption></figure>

5. On the left side bar, click Certificates & secrets -> New client secret -> Add:

<figure><img src="/files/aTmSEhgC28igsKOGBbgd" alt=""><figcaption></figcaption></figure>

6. After the secret has been created, copy the Value, <mark style="color:red;">not</mark> the Secret ID:

<figure><img src="/files/KspmxMUIr50u1aUHD44X" alt=""><figcaption></figcaption></figure>

7. On the left side bar, click Authentication -> Add a platform -> Web:

<figure><img src="/files/rkJW9HesnZT2sNIUmBOR" alt=""><figcaption></figcaption></figure>

8. Fill in the redirect URIs. This will need to be changed depending on how you are hosting it: `http[s]://[your-flowise-instance.com]/api/v1/azure/callback`:

<figure><img src="/files/DcA7ap8koso804WnaIA1" alt="" width="514"><figcaption></figcaption></figure>

9. You should be able to see the new Redirect URI created:

<figure><img src="/files/PxwK0aISYXFWVDSFz0nS" alt=""><figcaption></figcaption></figure>

10. Back in the Flowise app, log in as the Organization Admin. Navigate to SSO Config from the left side bar. Fill in the Azure Tenant ID and Client ID from Step 4, and the Client Secret from Step 6. Click Test Configuration to see if the connection can be established successfully:

<figure><img src="/files/TmvWaxT0HJNNgZZz3cft" alt="" width="563"><figcaption></figcaption></figure>

11. Lastly, enable and save it:

<figure><img src="/files/xSMRliDRJwvfLswwXGN4" alt="" width="563"><figcaption></figcaption></figure>

12. Before users can sign in using SSO, they have to be invited first. Refer to [Inviting users for SSO sign in](#inviting-users-for-sso-sign-in) for a step-by-step guide. Invited users must also be part of the Directory Users in Azure.

<figure><img src="/files/ZbiDm0MatmhafeTkdKBw" alt=""><figcaption></figcaption></figure>

## Google

To enable Sign In With Google on your website, you first need to set up your Google API client ID. To do so, complete the following steps:

1. Open the **Credentials** page of the [Google APIs console](https://console.developers.google.com/apis).
2. Click **Create credentials** > **OAuth client ID**

<figure><img src="/files/1ij4PEQ7o0HCxBYGySsQ" alt="" width="563"><figcaption></figcaption></figure>

3\. Select **Web Application**:

<figure><img src="/files/tIG5RLVC1yOt2xYo7RUs" alt="" width="504"><figcaption></figcaption></figure>

4\. Fill in the redirect URI. This needs to match how you are hosting Flowise: `http[s]://[your-flowise-instance.com]/api/v1/google/callback`:

<figure><img src="/files/svEAm8bGn3Sfxw68USc9" alt="" width="563"><figcaption></figcaption></figure>

5\. After creating, grab the client ID and secret:

<figure><img src="/files/mezhIKzyYm7N2Tq2EBZf" alt=""><figcaption></figcaption></figure>

6\. Back in the Flowise app, add the Client ID and Secret. Test the connection and save it.

<figure><img src="/files/14RuRMogIMqM1QR5UoZH" alt="" width="563"><figcaption></figcaption></figure>

## Auth0

1. Register an account on [Auth0](https://auth0.com/), then create a new Application

<figure><img src="/files/Cqhakt000vTUWLN9meSa" alt=""><figcaption></figcaption></figure>

2. Select **Regular Web Application**:

<figure><img src="/files/KLyq0eELC5WUXQxQI6W3" alt=""><figcaption></figcaption></figure>

3. Configure the fields such as Name, Description. Take notes of the **Domain**, **Client ID**, and **Client Secret**.

<figure><img src="/files/vdQMxt3pyrww8rq8t0k1" alt=""><figcaption></figcaption></figure>

4\. Fill in the Application URIs. These need to match how you are hosting Flowise: `http[s]://[your-flowise-instance.com]/api/v1/auth0/callback`:

<figure><img src="/files/plE0m2JfG5WX3odOIhB4" alt=""><figcaption></figcaption></figure>

5. In the APIs tab, ensure that the Auth0 Management API is enabled with the following permissions:
   * read:users
   * read:client\_grants

<figure><img src="/files/B6hGrmXoht4dJCIOa7Nb" alt=""><figcaption></figcaption></figure>

6\. Back in the Flowise app, fill in the Domain, Client ID, and Client Secret. Test and save the configuration.

<figure><img src="/files/lTzWjP28LH6Y7WabRXSg" alt="" width="563"><figcaption></figcaption></figure>

## Inviting users for SSO sign in

Before new users can log in, you have to invite them into the Flowise application. This is essential to keep a record of the role/workspace of each invited user. Refer to the [Invite Users](/using-flowise/workspaces#invite-user) section for environment variable configuration.

The invited user will receive an invitation link to log in:

<figure><img src="/files/vhSKdKyshMfB18Ws3dnM" alt="" width="449"><figcaption></figcaption></figure>

Clicking the button brings the invited user directly to the Flowise SSO login screen:

<figure><img src="/files/iYt3TvFPVFoVrzuLnMc4" alt="" width="400"><figcaption></figcaption></figure>

Alternatively, the user can navigate to the Flowise app and sign in with SSO:

<figure><img src="/files/B1AXaZ3pL8biR6ZkYbo7" alt="" width="437"><figcaption></figcaption></figure>


# Running Flowise using Queue

By default, Flowise runs on the NodeJS main thread. However, with a large number of predictions, this does not scale well. Therefore, there are 2 modes you can configure: `main` (default) and `queue`.

## Queue Mode

With the following environment variables, you can run Flowise in `queue` mode.

<table><thead><tr><th width="263">Variable</th><th>Description</th><th>Type</th><th>Default</th></tr></thead><tbody><tr><td>MODE</td><td>Mode to run Flowise</td><td>Enum String: <code>main</code>, <code>queue</code></td><td><code>main</code></td></tr><tr><td>WORKER_CONCURRENCY</td><td>How many jobs are allowed to be processed in parallel for a worker. If you have 1 worker, that means how many concurrent prediction tasks it can handle. More <a href="https://docs.bullmq.io/guide/workers/concurrency">info</a></td><td>Number</td><td>10000</td></tr><tr><td>QUEUE_NAME</td><td>The name of the message queue</td><td>String</td><td>flowise-queue</td></tr><tr><td>QUEUE_REDIS_EVENT_STREAM_MAX_LEN</td><td>Event stream is auto-trimmed so that its size does not grow too much. More <a href="https://docs.bullmq.io/guide/events">info</a></td><td>Number</td><td>10000</td></tr><tr><td>REDIS_URL</td><td>Redis URL</td><td>String</td><td></td></tr><tr><td>REDIS_HOST</td><td>Redis host</td><td>String</td><td>localhost</td></tr><tr><td>REDIS_PORT</td><td>Redis port</td><td>Number</td><td>6379</td></tr><tr><td>REDIS_USERNAME</td><td>Redis username (optional)</td><td>String</td><td></td></tr><tr><td>REDIS_PASSWORD</td><td>Redis password (optional)</td><td>String</td><td></td></tr><tr><td>REDIS_TLS</td><td>Redis TLS connection (optional) More <a href="https://redis.io/docs/latest/operate/oss_and_stack/management/security/encryption/">info</a></td><td>Boolean</td><td>false</td></tr><tr><td>REDIS_CERT</td><td>Redis self-signed certificate</td><td>String</td><td></td></tr><tr><td>REDIS_KEY</td><td>Redis self-signed certificate key file</td><td>String</td><td></td></tr><tr><td>REDIS_CA</td><td>Redis self-signed certificate CA file</td><td>String</td><td></td></tr></tbody></table>

In `queue` mode, the main server is responsible for processing requests and sending jobs to the message queue; it does not execute the jobs itself. One or more workers receive jobs from the queue, execute them, and send the results back.

This allows for dynamic scaling: you can add workers to handle increased workloads or remove them during lighter periods.
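Putting the variables from the table above together, a minimal `.env` for running in queue mode might look like this (the Redis connection values are placeholders for your own setup):

```bash
# Run Flowise in queue mode
MODE=queue
QUEUE_NAME=flowise-queue

# Redis connection shared by the main server and all workers
REDIS_HOST=localhost
REDIS_PORT=6379

# Max number of jobs processed in parallel per worker
WORKER_CONCURRENCY=10000
```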

Here's how it works:

1. The main server receives prediction and other requests from the web, adding them as jobs to the queue.
2. These job queues are essentially lists of tasks waiting to be processed. Workers, which are separate processes or threads, pick up these jobs and execute them.
3. Once a job is completed, the worker:
   * Writes the results to the database.
   * Sends an event to indicate the completion of the job.
4. The main server receives the event and sends the result back to the UI.
5. Redis pub/sub is also used for streaming data back to the UI.

<figure><img src="/files/jKrlhkkOgDU6jlNZzcmD" alt=""><figcaption></figcaption></figure>

## Flow Diagram

<figure><img src="/files/CsrFEdx5khzs2OMAaAk5" alt=""><figcaption></figcaption></figure>

#### 1. Request Entry Point

A prediction request hits the Express server and immediately checks if `MODE=QUEUE`. If true, it switches from direct execution to asynchronous queue processing.

#### 2. Job Creation & Dual Channels

The system creates two parallel paths:

* **Job Channel**: Request data becomes a Redis job via BullMQ, HTTP thread waits for completion
* **Stream Channel**: SSE connection established for real-time updates via Redis pub/sub

#### 3. Worker Processing

Independent worker processes poll Redis for jobs. When assigned:

* Reconstruct full execution context (DB, components, abort controllers)
* Execute workflow with node-by-node processing
* Publish real-time events (tokens, tools, progress) to Redis channels

#### 4. Real-time Communication

During execution:

* [**RedisEventPublisher**](https://github.com/FlowiseAI/Flowise/blob/main/packages/server/src/queue/RedisEventPublisher.ts) broadcasts events from worker to Redis
* [**RedisEventSubscriber**](https://github.com/FlowiseAI/Flowise/blob/main/packages/server/src/queue/RedisEventSubscriber.ts) forwards events from Redis to SSE clients
* [**SSEStreamer**](https://github.com/FlowiseAI/Flowise/blob/main/packages/server/src/utils/SSEStreamer.ts) delivers events to browser in real-time

#### 5. Completion & Response

Job finishes, result stored in Redis:

* HTTP thread unblocks, receives result
* SSE connection closes gracefully
* Resources cleaned up (abort controllers, connections)

## Local Setup

### Start Redis

Before starting the main server and workers, Redis needs to be running first. You can run Redis on a separate machine, but make sure it's accessible by the server and worker instances.

For example, you can run Redis in Docker by following this [guide](https://www.docker.com/blog/how-to-use-the-redis-docker-official-image/).

### Start Main Server

This is the same as running Flowise by default, with the exception of configuring the environment variables mentioned above.

```bash
pnpm start
```

### Start Worker

As with the main server, the environment variables above must be configured. We recommend using the same `.env` file for both the main and worker instances. The only difference is how the workers are started. Open another terminal and run:

```bash
pnpm run start-worker
```

{% hint style="warning" %}
The main server and workers need to share the same secret key. Refer to [Environment Variables](/configuration/environment-variables#for-credentials). For production, we recommend using Postgres as the database for performance.
{% endhint %}

## Docker Setup

### Method 1: Pre-built Images (Recommended)

This method uses pre-built Docker images from Docker Hub, making it the fastest and most reliable deployment option.

**Step 1: Setup Environment**

Create a `.env` file in the `docker` directory:

```bash
# Basic Configuration
PORT=3000
WORKER_PORT=5566

# Queue Configuration (Required)
MODE=queue
QUEUE_NAME=flowise-queue
REDIS_URL=redis://redis:6379

# Optional Queue Settings
WORKER_CONCURRENCY=5
REMOVE_ON_AGE=24
REMOVE_ON_COUNT=1000
QUEUE_REDIS_EVENT_STREAM_MAX_LEN=1000
ENABLE_BULLMQ_DASHBOARD=false

# Database (Optional - defaults to SQLite)
DATABASE_PATH=/root/.flowise

# Storage
BLOB_STORAGE_PATH=/root/.flowise/storage

# Secret Keys
SECRETKEY_PATH=/root/.flowise

# Logging
LOG_PATH=/root/.flowise/logs
```

**Step 2: Deploy**

```bash
cd docker
docker compose -f docker-compose-queue-prebuilt.yml up -d
```

**Step 3: Verify Deployment**

```bash
# Check container status
docker compose -f docker-compose-queue-prebuilt.yml ps

# View logs
docker compose -f docker-compose-queue-prebuilt.yml logs -f flowise
docker compose -f docker-compose-queue-prebuilt.yml logs -f flowise-worker
```

### Method 2: Build from Source

This method builds Flowise from source code, useful for development or custom modifications.

**Step 1: Setup Environment**

Create the same `.env` file as in [Method 1](#method-1-pre-built-images-recommended).

**Step 2: Deploy**

```bash
cd docker
docker compose -f docker-compose-queue-source.yml up -d
```

**Step 3: Build Process**

The source build will:

* Build the main Flowise application from source
* Build the worker image from source
* Set up Redis and networking

**Step 4: Monitor Build**

```bash
# Watch build progress
docker compose -f docker-compose-queue-source.yml logs -f

# Check final status
docker compose -f docker-compose-queue-source.yml ps
```

### Health Checks

All compose files include health checks:

```bash
# Check main instance health
curl http://localhost:3000/api/v1/ping

# Check worker health
curl http://localhost:5566/healthz
```

## Queue Dashboard

Setting `ENABLE_BULLMQ_DASHBOARD` to `true` allows users to view all the jobs, statuses, results, and data by navigating to `<your-flowise-url.com>/admin/queues`

<figure><img src="/files/T7OmTwvA3qX774ByHIIU" alt=""><figcaption></figcaption></figure>


# Running in Production

## Mode

When running in production, we highly recommend using [Queue](/configuration/running-flowise-using-queue) mode with the following settings:

* 2 main servers with load balancing, each starting from 4vCPU 8GB RAM
* 4 workers, each starting from 4vCPU 8GB RAM

You can configure auto scaling depending on the traffic and volume.

## Database

By default, Flowise uses SQLite as the database. However, when running at scale, it's recommended to use PostgreSQL.
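As a sketch, the Postgres connection is configured through environment variables (names follow the [Environment Variables](/configuration/environment-variables) page; all values below are placeholders for your own deployment):

```bash
# Switch from the default SQLite to PostgreSQL
DATABASE_TYPE=postgres
DATABASE_HOST=localhost
DATABASE_PORT=5432
DATABASE_NAME=flowise
DATABASE_USER=flowise
DATABASE_PASSWORD=your-password
```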

## Storage

Currently Flowise only supports [AWS S3](https://aws.amazon.com/s3/), with plans to support more blob storage providers. This allows files and logs to be stored on S3 instead of a local file path. Refer to [Environment Variables](/configuration/environment-variables#for-storage)
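As a sketch, S3 storage is enabled through environment variables (names per the [Environment Variables](/configuration/environment-variables#for-storage) page; bucket, keys, and region below are placeholders):

```bash
# Store files and logs on S3 instead of the local file system
STORAGE_TYPE=s3
S3_STORAGE_BUCKET_NAME=your-bucket
S3_STORAGE_ACCESS_KEY_ID=your-access-key-id
S3_STORAGE_SECRET_ACCESS_KEY=your-secret-access-key
S3_STORAGE_REGION=us-east-1
```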

## Encryption

Flowise uses an encryption key to encrypt/decrypt credentials such as OpenAI API keys. [AWS Secrets Manager](https://aws.amazon.com/secrets-manager/) is recommended in production for better security control and key rotation. Refer to [Environment Variables](/configuration/environment-variables#for-credentials)

## Rate Limit

When deployed to the cloud or on-prem, the instances are most likely behind a proxy/load balancer. The IP address seen on each request may then be that of the load balancer/reverse proxy (or `undefined`), effectively making the rate limiter a global one that blocks all requests once the limit is reached. Setting the correct `NUMBER_OF_PROXIES` resolves the issue. Refer to [Rate Limit](/configuration/rate-limit#rate-limit-setup)
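For example, if exactly one reverse proxy (such as a single load balancer) sits between clients and Flowise, the rate limiter should trust one hop when resolving the client IP:

```bash
# One load balancer / reverse proxy in front of Flowise
NUMBER_OF_PROXIES=1
```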

## Load Testing

Artillery can be used to load test your deployed Flowise application. An example script can be found [here](https://github.com/FlowiseAI/Flowise/blob/main/artillery-load-test.yml).

## Security

Refer to [Environment Variables](/configuration/environment-variables#security-configuration)


# Integrations

Learn about all available integrations / nodes in Flowise

***

In Flowise, nodes are referred to as integrations. Similar to LEGO, you can build a customized LLM orchestration flow, a chatbot, or an agent with the integrations available in Flowise.

### LangChain

* [Agents](/integrations/langchain/agents)
* [Cache](/integrations/langchain/cache)
* [Chains](/integrations/langchain/chains)
* [Chat Models](/integrations/langchain/chat-models)
* [Document Loaders](/integrations/langchain/document-loaders)
* [Embeddings](/integrations/langchain/embeddings)
* [LLMs](/integrations/langchain/llms)
* [Memory](/integrations/langchain/memory)
* [Moderation](/integrations/langchain/moderation)
* [Output Parsers](/integrations/langchain/output-parsers)
* [Prompts](/integrations/langchain/prompts)
* [Record Managers](/integrations/langchain/record-managers)
* [Retrievers](/integrations/langchain/retrievers)
* [Text Splitters](/integrations/langchain/text-splitters)
* [Tools](/integrations/langchain/tools)
* [Vector Stores](/integrations/langchain/vector-stores)

### LlamaIndex

* [Agents](/integrations/llamaindex/agents)
* [Chat Models](/integrations/llamaindex/chat-models)
* [Embeddings](/integrations/llamaindex/embeddings)
* [Engine](/integrations/llamaindex/engine)
* [Response Synthesizer](/integrations/llamaindex/response-synthesizer)
* [Tools](/integrations/llamaindex/tools)
* [Vector Stores](/integrations/llamaindex/vector-stores)

### Utilities

* [Custom JS Function](/integrations/utilities/custom-js-function)
* [Set/Get Variable](/integrations/utilities/set-get-variable)
* [If Else](/integrations/utilities/if-else)
* [Sticky Note](/integrations/utilities/sticky-note)

### External Integrations

* [Zapier Zaps](/integrations/3rd-party-platform-integration/zapier-zaps)


# LangChain

Learn how Flowise integrates with the LangChain framework

***

[**LangChain**](https://www.langchain.com/) is a framework for developing applications powered by language models. It simplifies the process of creating generative AI applications, connecting data sources, vector stores, and memory with LLMs.

Flowise complements LangChain by offering a visual interface. Here, nodes are organized into distinct sections, making it easier to build workflows.

### LangChain Sections:

* [Agents](/integrations/langchain/agents)
* [Cache](/integrations/langchain/cache)
* [Chains](/integrations/langchain/chains)
* [Chat Models](/integrations/langchain/chat-models)
* [Document Loaders](/integrations/langchain/document-loaders)
* [Embeddings](/integrations/langchain/embeddings)
* [LLMs](/integrations/langchain/llms)
* [Memory](/integrations/langchain/memory)
* [Moderation](/integrations/langchain/moderation)
* [Output Parsers](/integrations/langchain/output-parsers)
* [Prompts](/integrations/langchain/prompts)
* [Record Managers](/integrations/langchain/record-managers)
* [Retrievers](/integrations/langchain/retrievers)
* [Text Splitters](/integrations/langchain/text-splitters)
* [Tools](/integrations/langchain/tools)
* [Vector Stores](/integrations/langchain/vector-stores)


# Agents

LangChain Agent Nodes

***

By themselves, language models can't take actions - they just output text.

Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be. The results of those actions can then be fed back into the agent, which determines whether more actions are needed or whether it is okay to finish.

### Agent Nodes:

* [Airtable Agent](/integrations/langchain/agents/airtable-agent)
* [AutoGPT](/integrations/langchain/agents/autogpt)
* [BabyAGI](/integrations/langchain/agents/babyagi)
* [CSV Agent](/integrations/langchain/agents/csv-agent)
* [Conversational Agent](/integrations/langchain/agents/conversational-agent)
* [Conversational Retrieval Agent](/integrations/langchain/agents/conversational-retrieval-agent)
* [MistralAI Tool Agent](/integrations/langchain/agents/mistralai-tool-agent)
* [OpenAI Assistant](/integrations/langchain/agents/openai-assistant)
* [OpenAI Function Agent](/integrations/langchain/agents/openai-function-agent)
* [OpenAI Tool Agent](/integrations/langchain/agents/openai-tool-agent)
* [ReAct Agent Chat](/integrations/langchain/agents/react-agent-chat)
* [ReAct Agent LLM](/integrations/langchain/agents/react-agent-llm)
* [Tool Agent](/integrations/langchain/agents/tool-agent)
* [XML Agent](/integrations/langchain/agents/xml-agent)


# Airtable Agent

Agent used to answer queries on an Airtable table.

<figure><img src="/files/gsQv5OaYwt66RFHGZW3u" alt="" width="271"><figcaption><p>Airtable Agent Node</p></figcaption></figure>

## Airtable Agent Functionality

The Airtable Agent is designed to facilitate interactions between Flowise AI and Airtable tables, enabling users to query Airtable data in a conversational manner. By using this agent, users can ask questions about the contents of their Airtable base and receive relevant responses based on the stored data. This can be particularly useful for quickly extracting specific pieces of information, automating workflows, or generating summaries from the data stored in Airtable.

For example, the Airtable Agent can be used to answer questions like:

* "How many tasks are still incomplete in my project tracker table?"
* "What are the contact details of the clients listed in the CRM?"
* "Give me a summary of all records added in the past week."

This functionality helps users get insights from their Airtable bases without needing to navigate through the Airtable interface, making it easier to manage and analyze their data in a seamless, interactive way.

## Inputs

The Airtable Agent requires the following inputs to function effectively:

* **Language Model**: The language model to be used for processing queries. This input is required and helps determine the quality and accuracy of responses provided by the agent.
* **Input Moderation**: Optional input that enables content moderation. This helps ensure that queries are appropriate and do not contain offensive or harmful content.
* **Connect Credential**: Required input to connect to Airtable. Users must select the appropriate credential that has permissions to access their Airtable data.
* **Base ID**: The ID of the Airtable base to connect to. This is a required field and can be found in the Airtable API documentation or the base settings. If your table URL looks like `https://airtable.com/app11RobdGoX0YNsC/tblJdmvbrgizbYlCO/viw9UrP77idOCE4ee`, `app11RobdGoX0YNsC` is the Base ID. It is used to specify which Airtable base contains the data to be queried.
* **Table ID**: The ID of the specific table within the Airtable base. This is also a required field and helps the agent target the correct table for data retrieval. In the example URL `https://airtable.com/app11RobdGoX0YNsC/tblJdmvbrgizbYlCO/viw9UrP77idOCE4ee`, `tblJdmvbrgizbYlCO` is the Table ID.
* **Additional Parameters**: Optional parameters that can be used to customize the behavior of the agent. These parameters can be configured based on specific use cases.
  * **Return All**: This option allows users to return all records from the specified table. If enabled, all records will be retrieved; otherwise, only a limited number will be returned.
  * **Limit**: Specifies the maximum number of records to be returned if **Return All** is not enabled. The default value is `100`.

{% hint style="info" %}
This section is a work in progress. We appreciate any help you can provide in completing this section. Please check our [Contribution Guide](broken://pages/G48tdmpQ3z4CTWEspqkA) to get started.
{% endhint %}


# AutoGPT

Autonomous agent with chain of thoughts for self-guided task completion.

<figure><img src="/files/1glUqxCWvLWmwGwRSaP1" alt="" width="277"><figcaption><p>AutoGPT Node</p></figcaption></figure>

{% hint style="info" %}
This section is a work in progress. We appreciate any help you can provide in completing this section. Please check our [Contribution Guide](broken://pages/G48tdmpQ3z4CTWEspqkA) to get started.
{% endhint %}


# BabyAGI

Task-driven autonomous agent which creates new tasks and reprioritizes the task list based on an objective.

<figure><img src="/files/yJMk3U25lxypxgoDpv9O" alt="" width="275"><figcaption><p>BabyAGI Node</p></figcaption></figure>

{% hint style="info" %}
This section is a work in progress. We appreciate any help you can provide in completing this section. Please check our [Contribution Guide](broken://pages/G48tdmpQ3z4CTWEspqkA) to get started.
{% endhint %}


# CSV Agent

Agent used to answer queries on CSV data.

<figure><img src="/files/1VTBqOmKLf8nNdf32m87" alt="" width="273"><figcaption><p>CSV Agent Node</p></figcaption></figure>

{% hint style="info" %}
This section is a work in progress. We appreciate any help you can provide in completing this section. Please check our [Contribution Guide](broken://pages/G48tdmpQ3z4CTWEspqkA) to get started.
{% endhint %}


# Conversational Agent

Conversational agent for a chat model. It will utilize chat-specific prompts.

<figure><img src="/files/4JjaPwKHD4S2dNlHQq8a" alt="" width="271"><figcaption><p>Conversational Agent Node</p></figcaption></figure>

## Set Up the Conversational Agent

## Prerequisites:

* Set up Flowise.
* Download and install Docker.
* Download and install the Ollama language model locally on your machine.
  * <https://ollama.com/>
* Download and install Redis for AI.
  * <https://redis.io/redis-for-ai/>

## Context:

Unlike standard large language models (LLMs), which provide general-purpose models for performing language-based tasks, conversational agents are more sophisticated as they are designed specifically for managing conversations effectively.

You can use the Flowise conversational agent to create a comprehensive and interactive conversation experience.

## Steps:

1. Access the Chatflows menu.
   1. Open your browser and go to <http://localhost:3000>.
   2. In Flowise, click **Chatflows**.
2. Create a new chatflow:
   1. Click **Add New**.
   2. Enter a name for your chatflow and click **Save**.
3. Add a conversational agent node:
   1. Click **Add Node**.
   2. Search for the conversational agent.
   3. Drag and drop the conversational agent node into your chatflow workspace.
4. Add a SearchAPI node. This node enables the agent to fetch data from Google search results:
   1. Click **Add Node**.
   2. Search for the SearchAPI node. It displays in the **Tools** section of the search results.
   3. Drag and drop the SearchAPI node into your chatflow workspace.
   4. Create a free SearchAPI account and retrieve your SearchAPI API key. The SearchAPI node needs this key to authenticate and perform search queries.
   5. On the SearchAPI node, click **Connect Credentials > Create New**.
   6. Enter a name for your credentials (e.g., SearchAPI Credentials), copy and paste your SearchAPI API key into the **SearchAPI API Key** field, and click **Add**.
   7. Connect the SearchAPI node to the Conversational Agent node by drawing a line from the Output section of the SearchAPI node to the Allowed Tools Inputs section of the Conversational Agent node.
   8. Click **Save Chatflow** to save your progress.
5. Add a ChatOllama chat model node. This node enables the agent to use Ollama language models to generate responses:
   1. Click **Add Node**.
   2. Search for the ChatOllama node. It displays in the **Chat Models** section of the search results.
   3. Drag and drop the ChatOllama node into your chatflow workspace.
   4. In the **Model Name** field, enter the model you’d like to use. We recommend llama3.2.
   5. In the **Temperature** field, set a temperature value between 0 and 1. The temperature parameter controls the randomness of the model's responses: a low temperature produces deterministic, focused responses, while a high temperature produces creative, varied responses. We recommend a temperature value of 0.5.
   6. Connect the ChatOllama node to the Conversational Agent node by drawing a line from the Output section of the ChatOllama node to the Chat Model Inputs section of the Conversational Agent node.
   7. Click **Save Chatflow** to save your progress.
6. Add a Redis chat memory node. This node enables the agent to remember previous interactions and store them in the chat history, enhancing the overall user experience:
   1. Click **Add Node**.
   2. Search for the Redis-Backed Chat Memory node. It displays in the **Memory** section of the search results.
   3. Drag and drop the Redis-Backed Chat Memory node into your chatflow workspace.
   4. On the Redis-Backed Chat Memory node, click **Connect Credentials > Create New**.
   5. Enter either your Redis API username and password or your Redis credential name and URL and click **Add**.
   6. Connect the Redis-Backed Chat Memory node to the Conversational Agent node by drawing a line from the Output section of the Redis node to the Memory Inputs section of the Conversational Agent node.
   7. Click **Save Chatflow** to save your progress.

## Result:

By following these steps, you will have successfully created a conversational agent that you can chat with and ask questions.

## Next Steps:

Click the chat icon to interact with your newly created conversational agent. If you are running Redis locally, ensure that your Docker container for Redis is running before you start the chat.

## Related Links and Troubleshooting:

For additional information and troubleshooting, see <https://redis.io/tutorials/howtos/solutions/flowise/conversational-agent/>.


# Conversational Retrieval Agent

Deprecated node.

<figure><img src="/files/yUKlu3KCqo8H8x93bAlz" alt="" width="256"><figcaption></figcaption></figure>


# MistralAI Tool Agent

Deprecated node.

<figure><img src="/files/yUKlu3KCqo8H8x93bAlz" alt="" width="256"><figcaption></figcaption></figure>


# OpenAI Assistant

An agent that uses OpenAI Assistant API to pick the tool and args to call.

<figure><img src="/files/GurzV2tdcqCfmIiuljMn" alt="" width="272"><figcaption><p>OpenAI Assistant</p></figcaption></figure>

{% hint style="info" %}
This section is a work in progress. We appreciate any help you can provide in completing this section. Please check our [Contribution Guide](/contributing) to get started.
{% endhint %}


# Threads

[Threads](https://platform.openai.com/docs/assistants/how-it-works/managing-threads-and-messages) are only used when an OpenAI Assistant is being used. A thread is a conversation session between an Assistant and a user. Threads store messages and automatically handle truncation to fit content into a model’s context.

<figure><img src="/files/TeN5EShMOsyQC0xTMPkE" alt=""><figcaption></figcaption></figure>

## Separate conversations for multiple users

### UI & Embedded Chat

By default, the UI and Embedded Chat automatically separate threads for multiple users' conversations. This is done by generating a unique **`chatId`** for each new interaction. That logic is handled under the hood by Flowise.

### Prediction API

When calling POST `/api/v1/prediction/{your-chatflowid}`, specify the **`chatId`**. The same thread will be used for the same `chatId`:

```json
{
    "question": "hello!",
    "chatId": "user1"
}
```
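For example, with `curl`, assuming a Flowise instance is running locally on port 3000 (the host and chatflow ID are placeholders; substitute your own):

```bash
# Both of these requests carry chatId "user1", so they continue the same thread.
curl -X POST "http://localhost:3000/api/v1/prediction/your-chatflowid" \
  -H "Content-Type: application/json" \
  -d '{"question": "hello!", "chatId": "user1"}'
```

Sending another request with the same `chatId` continues the same conversation session.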

### Message API

* GET `/api/v1/chatmessage/{your-chatflowid}`
* DELETE `/api/v1/chatmessage/{your-chatflowid}`

You can also filter via **`chatId`**: `/api/v1/chatmessage/{your-chatflowid}?chatId={your-chatid}`
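For example, to list only the messages belonging to one user's thread (again assuming a local instance; the chatflow ID is a placeholder):

```bash
# Retrieve only the messages stored under chatId "user1"
curl "http://localhost:3000/api/v1/chatmessage/your-chatflowid?chatId=user1"
```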

All conversations can also be visualized and managed from the UI:

<figure><img src="/files/sIfVb0GgryhgnPF9Sjch" alt=""><figcaption></figcaption></figure>


# OpenAI Function Agent

Deprecated node.

<figure><img src="/files/yUKlu3KCqo8H8x93bAlz" alt="" width="256"><figcaption></figcaption></figure>


# OpenAI Tool Agent

Deprecated node.

<figure><img src="/files/yUKlu3KCqo8H8x93bAlz" alt="" width="256"><figcaption></figcaption></figure>


# ReAct Agent Chat

Agent that uses the [ReAct](https://react-lm.github.io/) (Reasoning and Acting) logic to decide what action to take, optimized to be used with Chat Models.

<figure><img src="/files/uB5IO6nHUrTwGeEYKHaP" alt="" width="325"><figcaption></figcaption></figure>

<figure><img src="/files/X34UgwjcerYFemq7Nuzt" alt="" width="336"><figcaption><p>ReAct Agent Chat Node</p></figcaption></figure>

{% hint style="info" %}
This section is a work in progress. We appreciate any help you can provide in completing this section. Please check our [Contribution Guide](broken://pages/G48tdmpQ3z4CTWEspqkA) to get started.
{% endhint %}


# ReAct Agent LLM

Agent that uses the [ReAct](https://react-lm.github.io/) (Reasoning and Acting) logic to decide what action to take, optimized to be used with Non Chat Models.

<figure><img src="/files/uB5IO6nHUrTwGeEYKHaP" alt="" width="325"><figcaption></figcaption></figure>

<figure><img src="/files/6V70ZkZksJ9Pq1o7NCQ4" alt="" width="335"><figcaption><p>ReAct Agent LLM Node</p></figcaption></figure>

{% hint style="info" %}
This section is a work in progress. We appreciate any help you can provide in completing this section. Please check our [Contribution Guide](broken://pages/G48tdmpQ3z4CTWEspqkA) to get started.
{% endhint %}


# Tool Agent

Agent that uses Function Calling to pick the tools and args to call.

<figure><img src="/files/AxoronbTNYQwybt7BJev" alt="" width="337"><figcaption><p>Tool Agent Node</p></figcaption></figure>

{% hint style="info" %}
This section is a work in progress. We appreciate any help you can provide in completing this section. Please check our [Contribution Guide](broken://pages/G48tdmpQ3z4CTWEspqkA) to get started.
{% endhint %}


# XML Agent

Agent designed for LLMs that are good at reasoning over and writing XML (e.g., Anthropic Claude).

<figure><img src="/files/098xHfclACEKLGDQHmHt" alt="" width="335"><figcaption><p>XML Agent Node</p></figcaption></figure>

{% hint style="info" %}
This section is a work in progress. We appreciate any help you can provide in completing this section. Please check our [Contribution Guide](broken://pages/G48tdmpQ3z4CTWEspqkA) to get started.
{% endhint %}


# Cache

LangChain Cache Nodes

***

Caching can save you money by reducing the number of API calls you make to the LLM provider if you often request the same completion multiple times. It can also speed up your application by avoiding those repeated calls.

### Cache Nodes:

* [InMemory Cache](/integrations/langchain/cache/in-memory-cache)
* [InMemory Embedding Cache](/integrations/langchain/cache/inmemory-embedding-cache)
* [Momento Cache](/integrations/langchain/cache/momento-cache)
* [Redis Cache](/integrations/langchain/cache/redis-cache)
* [Redis Embeddings Cache](/integrations/langchain/cache/redis-embeddings-cache)
* [Upstash Redis Cache](/integrations/langchain/cache/upstash-redis-cache)


# InMemory Cache

Caches LLM responses in local memory. The cache is cleared when the app is restarted.

<figure><img src="/files/cJ7wke6brbSvTyzSD7UT" alt="" width="344"><figcaption><p>InMemory Cache Node</p></figcaption></figure>

{% hint style="info" %}
This section is a work in progress. We appreciate any help you can provide in completing this section. Please check our [Contribution Guide](broken://pages/G48tdmpQ3z4CTWEspqkA) to get started.
{% endhint %}


# InMemory Embedding Cache

Caches generated embeddings in memory to avoid recomputing them.

<figure><img src="/files/66qSpt0KmOIGaJ101ayV" alt="" width="340"><figcaption><p>InMemory Embedding Cache Node</p></figcaption></figure>

{% hint style="info" %}
This section is a work in progress. We appreciate any help you can provide in completing this section. Please check our [Contribution Guide](broken://pages/G48tdmpQ3z4CTWEspqkA) to get started.
{% endhint %}
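
Because embeddings are deterministic for a given text, they can be keyed by a hash of the text and computed only for unseen strings. A sketch of the idea, with a stub embedder that counts how many real computations happen (not Flowise's actual implementation):

```python
import hashlib

class EmbeddingCache:
    def __init__(self, embed_fn):
        self._embed = embed_fn
        self._store = {}

    def embed(self, texts):
        out = []
        for text in texts:
            key = hashlib.sha256(text.encode()).hexdigest()
            if key not in self._store:
                self._store[key] = self._embed(text)  # compute once per unique text
            out.append(self._store[key])
        return out

calls = []
def stub_embedder(text):
    calls.append(text)          # count real embedding computations
    return [float(len(text))]   # fake one-dimensional "vector"

cache = EmbeddingCache(stub_embedder)
cache.embed(["hello", "world", "hello"])
print(len(calls))  # 2 -- "hello" is embedded only once
```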


# Momento Cache

Caches LLM responses using Momento, a distributed, serverless cache.

<figure><img src="/files/nq6KWWVAIxByYgkvVSkl" alt="" width="331"><figcaption><p>Momento Cache Node</p></figcaption></figure>

{% hint style="info" %}
This section is a work in progress. We appreciate any help you can provide in completing this section. Please check our [Contribution Guide](broken://pages/G48tdmpQ3z4CTWEspqkA) to get started.
{% endhint %}


# Redis Cache

Caches LLM responses in Redis; useful for sharing the cache across multiple processes or servers.

<figure><img src="/files/LbLOqxc3ZZeRRTAKRtRm" alt="" width="331"><figcaption><p>Redis Cache Node</p></figcaption></figure>

{% hint style="info" %}
This section is a work in progress. We appreciate any help you can provide in completing this section. Please check our [Contribution Guide](broken://pages/G48tdmpQ3z4CTWEspqkA) to get started.
{% endhint %}
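
Moving the cache into Redis lets every process or server that shares the Redis instance reuse each other's cached responses, typically with a TTL so entries expire. A sketch of the pattern; `FakeRedis` below is a dict-backed stand-in with the same `get`/`setex` shape as a real `redis.Redis` client, so the example runs without a server:

```python
import hashlib
import json
import time

class FakeRedis:
    """Stand-in for redis.Redis (get/setex only) so this runs offline."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        value, expires = self._data.get(key, (None, 0))
        return value if value is not None and time.time() < expires else None

    def setex(self, key, ttl, value):
        self._data[key] = (value, time.time() + ttl)

def cached_completion(client, prompt, llm_call, ttl=3600):
    key = "llm:" + hashlib.sha256(prompt.encode()).hexdigest()
    hit = client.get(key)
    if hit is not None:
        return json.loads(hit)               # shared across processes/servers
    response = llm_call(prompt)
    client.setex(key, ttl, json.dumps(response))  # expire after `ttl` seconds
    return response

client = FakeRedis()
calls = []
def fake_llm(prompt):
    calls.append(prompt)
    return "Flowise is an open source AI platform"

first = cached_completion(client, "What is Flowise?", fake_llm)
second = cached_completion(client, "What is Flowise?", fake_llm)  # cache hit
```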


# Redis Embeddings Cache

Caches generated embeddings in Redis; useful for sharing the cache across multiple processes or servers.

<figure><img src="/files/Qeb3jUHA5G2i38o5vbC0" alt="" width="280"><figcaption><p>Redis Embeddings Cache Node</p></figcaption></figure>

{% hint style="info" %}
This section is a work in progress. We appreciate any help you can provide in completing this section. Please check our [Contribution Guide](broken://pages/G48tdmpQ3z4CTWEspqkA) to get started.
{% endhint %}


# Upstash Redis Cache

Caches LLM responses in Upstash Redis, a serverless data platform for Redis and Kafka.

<figure><img src="/files/CPRe3XTi8YRcagpdElL5" alt="" width="328"><figcaption><p>Upstash Redis Cache Node</p></figcaption></figure>

{% hint style="info" %}
This section is a work in progress. We appreciate any help you can provide in completing this section. Please check our [Contribution Guide](broken://pages/G48tdmpQ3z4CTWEspqkA) to get started.
{% endhint %}


# Chains

LangChain Chain Nodes

***

In the context of chatbots and large language models, "chains" typically refer to sequences of text or conversation turns. They store and manage the conversation history and context for the chatbot or language model, helping it understand the ongoing conversation and produce coherent, contextually relevant responses.

Here's how chains work:

1. **Conversation History**: When a user interacts with a chatbot or language model, the conversation is often represented as a series of text messages or conversation turns. Each message from the user and the model is stored in chronological order to maintain the context of the conversation.
2. **Input and Output**: Each chain consists of both user input and model output. The user's input is usually referred to as the "input chain," while the model's responses are stored in the "output chain." This allows the model to refer back to previous messages in the conversation.
3. **Contextual Understanding**: By preserving the entire conversation history in these chains, the model can understand the context and refer to earlier messages to provide coherent and contextually relevant responses. This is crucial for maintaining a natural and meaningful conversation with users.
4. **Maximum Length**: Chains have a maximum length to manage memory usage and computational resources. When a chain becomes too long, older messages may be removed or truncated to make room for new messages. This can potentially lead to loss of context if important conversation details are removed.
5. **Continuation of Conversation**: In a real-time chatbot or language model interaction, the input chain is continually updated with the user's new messages, and the output chain is updated with the model's responses. This allows the model to keep track of the ongoing conversation and respond appropriately.

Chains are a fundamental concept in building and maintaining chatbot and language model conversations. They ensure that the model has access to the context it needs to generate meaningful and context-aware responses, making the interaction more engaging and useful for users.
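
The bounded history described in points 1 through 5 above can be sketched as a simple data structure: turns are appended in order, the oldest are dropped once the maximum length is exceeded, and the retained turns are flattened into the next prompt. This is an illustrative sketch, not Flowise's implementation:

```python
from collections import deque

class ConversationChain:
    def __init__(self, max_turns=4):
        # deque with maxlen drops the oldest turn when full (point 4)
        self.history = deque(maxlen=max_turns)

    def add_turn(self, role, text):
        self.history.append((role, text))  # chronological order (point 1)

    def build_prompt(self, user_input):
        # Flatten retained turns plus the new input into one prompt (points 2-3, 5)
        lines = [f"{role}: {text}" for role, text in self.history]
        lines.append(f"user: {user_input}")
        return "\n".join(lines)

chain = ConversationChain(max_turns=2)
chain.add_turn("user", "Hi")
chain.add_turn("assistant", "Hello!")
chain.add_turn("user", "What is Flowise?")  # evicts the oldest turn ("Hi")
```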

### Chain Nodes:

* [GET API Chain](/integrations/langchain/chains/get-api-chain)
* [OpenAPI Chain](/integrations/langchain/chains/openapi-chain)
* [POST API Chain](/integrations/langchain/chains/post-api-chain)
* [Conversation Chain](/integrations/langchain/chains/conversation-chain)
* [Conversational Retrieval QA Chain](/integrations/langchain/chains/conversational-retrieval-qa-chain)
* [LLM Chain](/integrations/langchain/chains/llm-chain)
* [Multi Prompt Chain](/integrations/langchain/chains/multi-prompt-chain)
* [Multi Retrieval QA Chain](/integrations/langchain/chains/multi-retrieval-qa-chain)
* [Retrieval QA Chain](/integrations/langchain/chains/retrieval-qa-chain)
* [SQL Database Chain](/integrations/langchain/chains/sql-database-chain)
* [Vectara QA Chain](/integrations/langchain/chains/vectara-chain)
* [VectorDB QA Chain](/integrations/langchain/chains/vectordb-qa-chain)


# GET API Chain

Chain to run queries against a GET API endpoint.

<figure><img src="/files/tqweBNJNAnSJ0cKuPxws" alt="" width="337"><figcaption><p>GET API Chain Node</p></figcaption></figure>

{% hint style="info" %}
This section is a work in progress. We appreciate any help you can provide in completing this section. Please check our [Contribution Guide](broken://pages/G48tdmpQ3z4CTWEspqkA) to get started.
{% endhint %}


# OpenAPI Chain

Chain that automatically selects and calls APIs based only on an OpenAPI spec.

<figure><img src="/files/4B76uJ84EWJ3cjq2HHdX" alt="" width="335"><figcaption><p>OpenAPI Chain Node</p></figcaption></figure>

{% hint style="info" %}
This section is a work in progress. We appreciate any help you can provide in completing this section. Please check our [Contribution Guide](broken://pages/G48tdmpQ3z4CTWEspqkA) to get started.
{% endhint %}


# POST API Chain

Chain to run queries against a POST API endpoint.

<figure><img src="/files/vNsEqtTIqCPnQSi8FQqx" alt="" width="337"><figcaption><p>POST API Chain Node</p></figcaption></figure>

{% hint style="info" %}
This section is a work in progress. We appreciate any help you can provide in completing this section. Please check our [Contribution Guide](broken://pages/G48tdmpQ3z4CTWEspqkA) to get started.
{% endhint %}


# Conversation Chain

Conversational chain with memory, specific to chat models.

<figure><img src="/files/99Sd6oJkrXHfkLqKFwHw" alt="" width="332"><figcaption><p>Conversation Chain Node</p></figcaption></figure>

{% hint style="info" %}
This section is a work in progress. We appreciate any help you can provide in completing this section. Please check our [Contribution Guide](broken://pages/G48tdmpQ3z4CTWEspqkA) to get started.
{% endhint %}


# Conversational Retrieval QA Chain

A chain for performing question-answering tasks with a retrieval component.

<figure><img src="/files/oO8BfLtTAINe1yqUD83L" alt=""><figcaption></figcaption></figure>

## Definitions

**Retrieval-based question-answering chain:** a chain that integrates a retrieval component and lets you configure input parameters to perform question-answering tasks.\
**Retrieval-based chatbots:** chatbots that generate responses by selecting from a database of pre-defined responses, "retrieving" the most appropriate one based on the user's input.\
**QA (Question Answering):** systems designed to answer questions posed in natural language, typically by understanding the question and searching for or generating an appropriate answer.

## Inputs

* [Language Model](/integrations/langchain/chat-models)
* [Vector Store Retriever](/integrations/langchain/vector-stores)
* [Memory (optional)](/integrations/langchain/memory)

## Parameters

| Name                    | Description                                                                                                                                               |
| ----------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Return Source Documents | Whether to return the citations/sources that were used to build the response                                                                              |
| System Message          | An instruction for the LLM on how to answer the query                                                                                                     |
| Chain Option            | Method for summarizing, answering questions, and extracting information from documents. Read [more](https://js.langchain.com/docs/modules/chains/document/) |

## Outputs

| Name                           | Description                   |
| ------------------------------ | ----------------------------- |
| ConversationalRetrievalQAChain | Final node to return response |
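
At runtime, a chatflow built around this chain can be queried over Flowise's Prediction REST API. The sketch below only builds the request; the base URL and chatflow id are placeholders for your deployment's values, and passing a `sessionId` via `overrideConfig` is one way to keep the chain's optional memory scoped to a single conversation:

```python
import json

BASE_URL = "http://localhost:3000"     # placeholder: your Flowise instance
CHATFLOW_ID = "your-chatflow-id"       # placeholder: id of the deployed chatflow

def build_request(question, session_id=None):
    url = f"{BASE_URL}/api/v1/prediction/{CHATFLOW_ID}"
    payload = {"question": question}
    if session_id:
        # Reuse a session so the chain's memory keeps conversational context
        payload["overrideConfig"] = {"sessionId": session_id}
    return url, json.dumps(payload)

url, body = build_request("What does the uploaded document say about pricing?")
# POST `body` to `url` with Content-Type: application/json, e.g. with requests:
#   requests.post(url, data=body, headers={"Content-Type": "application/json"})
```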


# LLM Chain

Chain to run queries against LLMs.

<figure><img src="/files/qCRWjcWzk31lsl181Pyi" alt="" width="341"><figcaption><p>LLM Chain Node</p></figcaption></figure>

{% hint style="info" %}
This section is a work in progress. We appreciate any help you can provide in completing this section. Please check our [Contribution Guide](broken://pages/G48tdmpQ3z4CTWEspqkA) to get started.
{% endhint %}
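
Conceptually, an LLM Chain formats a prompt template with input variables and passes the result to the model. A minimal sketch of that behavior (the `fake_llm` is a stand-in for a real model call):

```python
def llm_chain(template, variables, llm_call):
    prompt = template.format(**variables)  # fill the template's input variables
    return llm_call(prompt)                # then run the model on the result

fake_llm = lambda prompt: prompt.upper()   # stand-in for a real model
result = llm_chain(
    "Write a product name for a company that makes {product}.",
    {"product": "colorful socks"},
    fake_llm,
)
```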


# Multi Prompt Chain

Chain that automatically picks an appropriate prompt from multiple prompt templates.

<figure><img src="/files/q3jPmBLgk77Xv5eEAhvz" alt="" width="334"><figcaption><p>Multi Prompt Chain Node</p></figcaption></figure>

{% hint style="info" %}
This section is a work in progress. We appreciate any help you can provide in completing this section. Please check our [Contribution Guide](broken://pages/G48tdmpQ3z4CTWEspqkA) to get started.
{% endhint %}
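
The routing step can be pictured as follows: given several specialized prompt templates, a router chooses the one best matching the incoming question. In the real chain the LLM itself does the routing; keyword matching stands in for it in this sketch, and the two templates are invented for illustration:

```python
PROMPTS = {
    "physics": "You are a physics professor. Answer: {input}",
    "math": "You are a mathematician. Answer: {input}",
}

def route(question: str) -> str:
    # Toy router: a real Multi Prompt Chain asks the LLM to pick the template
    topic = "math" if any(w in question.lower() for w in ("sum", "integral")) else "physics"
    return PROMPTS[topic].format(input=question)

print(route("What is the integral of x^2?"))
```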


# Multi Retrieval QA Chain

QA chain that automatically picks an appropriate vector store from multiple retrievers.

<figure><img src="/files/qzrxmiENdFrZkKqNkoKs" alt="" width="333"><figcaption><p>Multi Retrieval QA Chain Node</p></figcaption></figure>

{% hint style="info" %}
This section is a work in progress. We appreciate any help you can provide in completing this section. Please check our [Contribution Guide](broken://pages/G48tdmpQ3z4CTWEspqkA) to get started.
{% endhint %}


# Retrieval QA Chain

QA chain to answer a question based on the retrieved documents.

<figure><img src="/files/LFzz6Ry3PrCLYGXkW2wT" alt="" width="337"><figcaption><p>Retrieval QA Chain Node</p></figcaption></figure>

{% hint style="info" %}
This section is a work in progress. We appreciate any help you can provide in completing this section. Please check our [Contribution Guide](broken://pages/G48tdmpQ3z4CTWEspqkA) to get started.
{% endhint %}
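
The retrieve-then-answer pattern behind this chain can be sketched in a few lines: score stored documents against the question (word overlap stands in for vector similarity here), then hand the best match to the LLM as context. The documents below are invented for illustration:

```python
import re

DOCS = [
    "Flowise is an open source generative AI development platform.",
    "Redis is an in-memory data store.",
]

def words(text):
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question, docs, k=1):
    # Rank documents by word overlap with the question; a real chain
    # would use embedding similarity against a vector store instead
    return sorted(docs, key=lambda d: len(words(question) & words(d)), reverse=True)[:k]

question = "What is Flowise?"
context = retrieve(question, DOCS)[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```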





