API

Learn more about the details of the most commonly used APIs: Prediction and Vector Upsert.


Refer to the API Reference section for the full list of public APIs.

Prediction
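
You can also call the prediction endpoint directly over HTTP, without an SDK. Below is a minimal sketch using Python requests, assuming a local Flowise instance on port 3000; the Authorization header is only required when authentication is enabled on your instance, and the placeholder values are illustrative.

import requests

API_URL = "http://localhost:3000/api/v1/prediction/<chatflow-id>"
# Only needed when the instance requires authentication (the API reference
# below uses Bearer authorization).
HEADERS = {"Authorization": "Bearer <your-api-key>"}

response = requests.post(
    API_URL,
    headers=HEADERS,
    json={"question": "What is the capital of France?"},
)
print(response.json())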

Using Python/TS Library

Flowise provides two libraries:

  • Python: pip install flowise
  • Typescript: npm install flowise-sdk

Python

from flowise import Flowise, PredictionData

def test_non_streaming():
    client = Flowise()

    # Test non-streaming prediction
    completion = client.create_prediction(
        PredictionData(
            chatflowId="<chatflow-id>",
            question="What is the capital of France?",
            streaming=False
        )
    )

    # Process and print the response
    for response in completion:
        print("Non-streaming response:", response)

def test_streaming():
    client = Flowise()

    # Test streaming prediction
    completion = client.create_prediction(
        PredictionData(
            chatflowId="<chatflow-id>",
            question="Tell me a joke!",
            streaming=True
        )
    )

    # Process and print each streamed chunk
    print("Streaming response:")
    for chunk in completion:
        print(chunk)


if __name__ == "__main__":
    # Run non-streaming test
    test_non_streaming()

    # Run streaming test
    test_streaming()
Typescript

import { FlowiseClient } from 'flowise-sdk'

async function test_streaming() {
  const client = new FlowiseClient({ baseUrl: 'http://localhost:3000' });

  try {
    // For streaming prediction
    const prediction = await client.createPrediction({
      chatflowId: 'fe1145fa-1b2b-45b7-b2ba-bcc5aaeb5ffd',
      question: 'What is the revenue of Apple?',
      streaming: true,
    });

    for await (const chunk of prediction) {
        console.log(chunk);
    }
    
  } catch (error) {
    console.error('Error:', error);
  }
}

async function test_non_streaming() {
    const client = new FlowiseClient({ baseUrl: 'http://localhost:3000' });
  
    try {
      // For non-streaming prediction
      const prediction = await client.createPrediction({
        chatflowId: 'fe1145fa-1b2b-45b7-b2ba-bcc5aaeb5ffd',
        question: 'What is the revenue of Apple?',
      });
  
      console.log(prediction);
      
    } catch (error) {
      console.error('Error:', error);
    }
}

// Run non-streaming test
test_non_streaming()

// Run streaming test
test_streaming()
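
When streaming is enabled, the answer arrives as a sequence of events rather than a single payload. Below is a minimal sketch of collecting the token events into one answer string, assuming each streamed chunk is (or parses from JSON into) an event object of the form {"event": "token", "data": "..."}, as described in the Streaming documentation.

import json
from flowise import Flowise, PredictionData

def collect_answer(chatflow_id: str, question: str) -> str:
    client = Flowise()
    answer = ""
    completion = client.create_prediction(
        PredictionData(chatflowId=chatflow_id, question=question, streaming=True)
    )
    for chunk in completion:
        # Chunks may arrive as JSON strings; normalize them into event objects.
        event = json.loads(chunk) if isinstance(chunk, str) else chunk
        # Only "token" events carry pieces of the generated answer.
        if event.get("event") == "token":
            answer += event.get("data", "")
    return answer

print(collect_answer("<chatflow-id>", "Tell me a joke!"))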

Override Config

Override the existing input configuration of the chatflow with the overrideConfig property.

For security reasons, override config is disabled by default. To enable it, go to Chatflow Configuration -> Security tab, then select the properties that can be overridden.

Python

import requests
API_URL = "http://localhost:3000/api/v1/prediction/<chatflowId>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()
    
output = query({
    "question": "Hey, how are you?",
    "overrideConfig": {
        "sessionId": "123",
        "returnSourceDocuments": true
    }
})
Javascript

async function query(data) {
    const response = await fetch(
        "http://localhost:3000/api/v1/prediction/<chatflowId>",
        {
            method: "POST",
            headers: {
                "Content-Type": "application/json"
            },
            body: JSON.stringify(data)
        }
    );
    const result = await response.json();
    return result;
}

query({
    "question": "Hey, how are you?",
    "overrideConfig": {
        "sessionId": "123",
        "returnSourceDocuments": true
    }
}).then((response) => {
    console.log(response);
});

History

You can prepend history messages to give the LLM some context. For example, if you want the LLM to remember the user's name:

Python

import requests
API_URL = "http://localhost:3000/api/v1/prediction/<chatflowId>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()
    
output = query({
    "question": "Hey, how are you?",
    "history": [
        {
            "role": "apiMessage",
            "content": "Hello how can I help?"
        },
        {
            "role": "userMessage",
            "content": "Hi my name is Brian"
        },
        {
            "role": "apiMessage",
            "content": "Hi Brian, how can I help?"
        },
    ]
})
Javascript

async function query(data) {
    const response = await fetch(
        "http://localhost:3000/api/v1/prediction/<chatflowId>",
        {
            method: "POST",
            headers: {
                "Content-Type": "application/json"
            },
            body: JSON.stringify(data)
        }
    );
    const result = await response.json();
    return result;
}

query({
    "question": "Hey, how are you?",
    "history": [
        {
            "role": "apiMessage",
            "content": "Hello how can I help?"
        },
        {
            "role": "userMessage",
            "content": "Hi my name is Brian"
        },
        {
            "role": "apiMessage",
            "content": "Hi Brian, how can I help?"
        },
    ]
}).then((response) => {
    console.log(response);
});

Persist Memory

You can pass a sessionId to persist the state of the conversation, so that every subsequent API call has the context of previous messages. Otherwise, a new session is generated for each call.

Python

import requests
API_URL = "http://localhost:3000/api/v1/prediction/<chatflowId>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()
    
output = query({
    "question": "Hey, how are you?",
    "overrideConfig": {
        "sessionId": "123"
    } 
})
Javascript

async function query(data) {
    const response = await fetch(
        "http://localhost:3000/api/v1/prediction/<chatflowId>",
        {
            method: "POST",
            headers: {
                "Content-Type": "application/json"
            },
            body: JSON.stringify(data)
        }
    );
    const result = await response.json();
    return result;
}

query({
    "question": "Hey, how are you?",
    "overrideConfig": {
        "sessionId": "123"
    }
}).then((response) => {
    console.log(response);
});
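
To see this in practice, send two requests with the same sessionId; the second call can refer back to information given in the first. A minimal sketch (the session ID and questions are illustrative):

import requests

API_URL = "http://localhost:3000/api/v1/prediction/<chatflowId>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()

session = {"overrideConfig": {"sessionId": "user-42"}}

# First call: tell the flow a fact.
query({"question": "Hi, my name is Brian.", **session})

# Second call: with the same sessionId, the flow retains the context.
print(query({"question": "What is my name?", **session}))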

Variables

Pass variables in the API to be used by the nodes in the flow. See the Variables page for more details.

Python

import requests
API_URL = "http://localhost:3000/api/v1/prediction/<chatflowId>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()
    
output = query({
    "question": "Hey, how are you?",
    "overrideConfig": {
        "vars": {
            "foo": "bar"
        }
    }
})
Javascript

async function query(data) {
    const response = await fetch(
        "http://localhost:3000/api/v1/prediction/<chatflowId>",
        {
            method: "POST",
            headers: {
                "Content-Type": "application/json"
            },
            body: JSON.stringify(data)
        }
    );
    const result = await response.json();
    return result;
}

query({
    "question": "Hey, how are you?",
    "overrideConfig": {
        "vars": {
            "foo": "bar"
        }
    }
}).then((response) => {
    console.log(response);
});

Image Uploads

When Allow Image Upload is enabled, images can be uploaded from the chat interface.

Python

import requests
API_URL = "http://localhost:3000/api/v1/prediction/<chatflowId>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()
    
output = query({
    "question": "Can you describe the image?",
    "uploads": [
        {
            "data": 'data:image/png;base64,iVBORw0KGgdM2uN0', # base64 string or url
            "type": 'file', # file | url
            "name": 'Flowise.png',
            "mime": 'image/png'
        }
    ]
})
Javascript

async function query(data) {
    const response = await fetch(
        "http://localhost:3000/api/v1/prediction/<chatflowId>",
        {
            method: "POST",
            headers: {
                "Content-Type": "application/json"
            },
            body: JSON.stringify(data)
        }
    );
    const result = await response.json();
    return result;
}

query({
    "question": "Can you describe the image?",
    "uploads": [
        {
            "data": 'data:image/png;base64,iVBORw0KGgdM2uN0', //base64 string or url
            "type": 'file', //file | url
            "name": 'Flowise.png',
            "mime": 'image/png'
        }
    ]
}).then((response) => {
    console.log(response);
});

Speech to Text

When Speech to Text is enabled, users can speak directly into the microphone, and their speech is transcribed into text.

Python

import requests
API_URL = "http://localhost:3000/api/v1/prediction/<chatflowId>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()
    
output = query({
    "uploads": [
        {
            "data": 'data:audio/webm;codecs=opus;base64,GkXf', #base64 string
            "type": 'audio',
            "name": 'audio.wav',
            "mime": 'audio/webm'
        }
    ]
})
Javascript

async function query(data) {
    const response = await fetch(
        "http://localhost:3000/api/v1/prediction/<chatflowId>",
        {
            method: "POST",
            headers: {
                "Content-Type": "application/json"
            },
            body: JSON.stringify(data)
        }
    );
    const result = await response.json();
    return result;
}

query({
    "uploads": [
        {
            "data": 'data:audio/webm;codecs=opus;base64,GkXf', //base64 string
            "type": 'audio',
            "name": 'audio.wav',
            "mime": 'audio/webm'
        }
    ]
}).then((response) => {
    console.log(response);
});

Vector Upsert API

Document Loaders with File Upload

Some document loaders in Flowise allow users to upload files:

  • CSV File
  • Docx File
  • Json File
  • Json Lines File
  • PDF File
  • Text File
  • Unstructured File

If the flow contains Document Loaders with Upload File functionality, the API looks slightly different: instead of passing the body as JSON, form data is used. This allows you to send files to the API. To avoid having separate loaders for different file types, we recommend using the File Loader.

Make sure the sent file type is compatible with the file type expected by the document loader. For example, if a PDF File Loader is being used, you should only send .pdf files.

Python

import requests

API_URL = "http://localhost:3000/api/v1/vector/upsert/<chatflowId>"

# use form data to upload files
form_data = {
    "files": ('state_of_the_union.txt', open('state_of_the_union.txt', 'rb'))
}

body_data = {
    "returnSourceDocuments": True
}

def query(form_data):
    response = requests.post(API_URL, files=form_data, data=body_data)
    print(response)
    return response.json()

output = query(form_data)
print(output)
Javascript

// use FormData to upload files
let formData = new FormData();
formData.append("files", input.files[0]);
formData.append("returnSourceDocuments", true);

async function query(formData) {
    const response = await fetch(
        "http://localhost:3000/api/v1/vector/upsert/<chatflowId>",
        {
            method: "POST",
            body: formData
        }
    );
    const result = await response.json();
    return result;
}

query(formData).then((response) => {
    console.log(response);
});
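
If the document loader accepts more than one file, the files field can be repeated in the same form-data request. A hedged sketch (the file names are illustrative, and whether multiple uploads are accepted depends on the loader being used):

import requests

API_URL = "http://localhost:3000/api/v1/vector/upsert/<chatflowId>"

# Repeat the "files" field once per file, as a list of (field, file) tuples.
form_data = [
    ("files", ("report_q1.txt", open("report_q1.txt", "rb"))),
    ("files", ("report_q2.txt", open("report_q2.txt", "rb"))),
]

response = requests.post(API_URL, files=form_data)
print(response.json())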

Document Loaders without Upload

For other nodes without Upload File functionality, the API body is in JSON format, similar to the Prediction API.

Python

import requests

API_URL = "http://localhost:3000/api/v1/vector/upsert/<chatflowId>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    print(response)
    return response.json()

output = query({
    "overrideConfig": { # optional
        "returnSourceDocuments": true
    }
})
print(output)
Javascript

async function query(data) {
    const response = await fetch(
        "http://localhost:3000/api/v1/vector/upsert/<chatflowId>",
        {
            method: "POST",
            headers: {
                "Content-Type": "application/json"
            },
            body: JSON.stringify(data)
        }
    );
    const result = await response.json();
    return result;
}

query({
    "overrideConfig": { // optional
        "returnSourceDocuments": true
    }
}).then((response) => {
    console.log(response);
});

Document Upsert/Refresh API

Refer to the Document Stores section for more information about how to use the API.

Video Tutorials

These video tutorials cover the main use cases for implementing the Flowise API.


Create a new prediction

POST /prediction/{id}

Authorizations: Bearer JWT

Path parameters

  • id (string, required): Chatflow ID

Body

  • question (string, optional): The question being asked
  • overrideConfig (object, optional): The configuration to override the default prediction settings

Responses

  • 200 (application/json): Prediction created successfully
  • 400: Invalid input provided
  • 404: Chatflow not found
  • 422: Validation error
  • 500: Internal server error

Example request:
POST /prediction/{id} HTTP/1.1
Host: 
Authorization: Bearer JWT
Content-Type: application/json
Accept: */*
Content-Length: 278

{
  "question": "text",
  "overrideConfig": {},
  "history": [
    {
      "role": "apiMessage",
      "content": "Hello, how can I help you?"
    }
  ],
  "uploads": [
    {
      "type": "file",
      "name": "image.png",
      "data": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABgAAAAYCAYAAADgdz34AAABjElEQVRIS+2Vv0oDQRDG",
      "mime": "image/png"
    }
  ]
}
Example response (200):

{
  "text": "text",
  "json": {},
  "question": "text",
  "chatId": "text",
  "chatMessageId": "text",
  "sessionId": "text",
  "memoryType": "text",
  "sourceDocuments": [
    {
      "pageContent": "This is the content of the page.",
      "metadata": {
        "author": "John Doe",
        "date": "2024-08-24"
      }
    }
  ],
  "usedTools": [
    {
      "tool": "Name of the tool",
      "toolInput": {
        "input": "search query"
      },
      "toolOutput": "text"
    }
  ],
  "fileAnnotations": [
    {
      "filePath": "path/to/file",
      "fileName": "file.txt"
    }
  ]
}
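
On success, the most commonly used fields of the response are text (the answer) and, when the flow returns sources, sourceDocuments. A minimal sketch of reading them, matching the response schema above (the URL and question are illustrative):

import requests

API_URL = "http://localhost:3000/api/v1/prediction/<chatflowId>"

result = requests.post(
    API_URL, json={"question": "What is the revenue of Apple?"}
).json()

print("Answer:", result["text"])

# sourceDocuments is only present when the flow is configured to return sources.
for doc in result.get("sourceDocuments", []):
    print("Source:", doc["metadata"], "->", doc["pageContent"][:80])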

Upsert vector embeddings

POST /vector/upsert/{id}

Upsert vector embeddings of documents in a chatflow.

Authorizations: Bearer JWT

Path parameters

  • id (string, required): Chatflow ID

Body

  • stopNodeId (string, optional): When there are multiple vector store nodes, specify the node ID that should store the vectors. Example: node_1
  • overrideConfig (object, optional): The configuration to override the default vector upsert settings

Responses

  • 200 (application/json): Vector embeddings upserted successfully
  • 400: Invalid input provided
  • 404: Chatflow not found
  • 422: Validation error
  • 500: Internal server error

Example request:
POST /vector/upsert/{id} HTTP/1.1
Host: 
Authorization: Bearer JWT
Content-Type: application/json
Accept: */*
Content-Length: 43

{
  "stopNodeId": "node_1",
  "overrideConfig": {}
}
Example response (200):

{
  "numAdded": 1,
  "numDeleted": 1,
  "numUpdated": 1,
  "numSkipped": 1,
  "addedDocs": [
    {
      "pageContent": "This is the content of the page.",
      "metadata": {
        "author": "John Doe",
        "date": "2024-08-24"
      }
    }
  ]
}

Upsert new document to document store

POST /document-store/upsert/{id}

Authorizations: Bearer JWT

Path parameters

  • id (string · uuid, required): Document store ID

Body

  • docId (string · uuid, optional): Document ID within the store. If provided, the existing configuration of that document is reused for the new document

Responses

  • 200 (application/json): Upsert operation executed successfully
  • 400: Invalid request body
  • 500: Internal server error

Example request:
POST /document-store/upsert/{id} HTTP/1.1
Host: 
Authorization: Bearer JWT
Content-Type: application/json
Accept: */*
Content-Length: 311

{
  "docId": "123e4567-e89b-12d3-a456-426614174000",
  "loader": {
    "name": "plainText",
    "config": {}
  },
  "splitter": {
    "name": "recursiveCharacterTextSplitter",
    "config": {}
  },
  "embedding": {
    "name": "openAIEmbeddings",
    "config": {}
  },
  "vectorStore": {
    "name": "faiss",
    "config": {}
  },
  "recordManager": {
    "name": "postgresRecordManager",
    "config": {}
  }
}
Example response (200):

{
  "numAdded": 1,
  "numDeleted": 1,
  "numUpdated": 1,
  "numSkipped": 1,
  "addedDocs": [
    {
      "pageContent": "This is the content of the page.",
      "metadata": {
        "author": "John Doe",
        "date": "2024-08-24"
      }
    }
  ]
}

Re-process and upsert all documents in document store

POST /document-store/refresh/{id}

Re-process and upsert all existing documents in a document store.

Authorizations: Bearer JWT

Path parameters

  • id (string · uuid, required): Document store ID

Body

  • items (array, optional): per-document loader, splitter, embedding, vector store, and record manager configuration (see the example request below)

Responses

  • 200 (application/json): Refresh operation executed successfully
  • 400: Invalid request body
  • 500: Internal server error

Example request:
POST /document-store/refresh/{id} HTTP/1.1
Host: 
Authorization: Bearer JWT
Content-Type: application/json
Accept: */*
Content-Length: 323

{
  "items": [
    {
      "docId": "123e4567-e89b-12d3-a456-426614174000",
      "loader": {
        "name": "plainText",
        "config": {}
      },
      "splitter": {
        "name": "recursiveCharacterTextSplitter",
        "config": {}
      },
      "embedding": {
        "name": "openAIEmbeddings",
        "config": {}
      },
      "vectorStore": {
        "name": "faiss",
        "config": {}
      },
      "recordManager": {
        "name": "postgresRecordManager",
        "config": {}
      }
    }
  ]
}

Example response (200): No content
