from flowise import Flowise, PredictionData

def test_non_streaming():
    client = Flowise()

    # Test non-streaming prediction
    completion = client.create_prediction(
        PredictionData(
            chatflowId="<chatflow-id>",
            question="What is the capital of France?",
            streaming=False
        )
    )

    # Process and print the response
    for response in completion:
        print("Non-streaming response:", response)

def test_streaming():
    client = Flowise()

    # Test streaming prediction
    completion = client.create_prediction(
        PredictionData(
            chatflowId="<chatflow-id>",
            question="Tell me a joke!",
            streaming=True
        )
    )

    # Process and print each streamed chunk
    print("Streaming response:")
    for chunk in completion:
        print(chunk)

if __name__ == "__main__":
    # Run non-streaming test
    test_non_streaming()

    # Run streaming test
    test_streaming()
import { FlowiseClient } from 'flowise-sdk'

async function test_streaming() {
    const client = new FlowiseClient({ baseUrl: 'http://localhost:3000' });

    try {
        // For streaming prediction
        const prediction = await client.createPrediction({
            chatflowId: 'fe1145fa-1b2b-45b7-b2ba-bcc5aaeb5ffd',
            question: 'What is the revenue of Apple?',
            streaming: true,
        });

        for await (const chunk of prediction) {
            console.log(chunk);
        }
    } catch (error) {
        console.error('Error:', error);
    }
}

async function test_non_streaming() {
    const client = new FlowiseClient({ baseUrl: 'http://localhost:3000' });

    try {
        // For non-streaming prediction
        const prediction = await client.createPrediction({
            chatflowId: 'fe1145fa-1b2b-45b7-b2ba-bcc5aaeb5ffd',
            question: 'What is the revenue of Apple?',
        });

        console.log(prediction);
    } catch (error) {
        console.error('Error:', error);
    }
}

// Run non-streaming test
test_non_streaming()

// Run streaming test
test_streaming()
Override Config
Override the existing input configuration of the chatflow with the overrideConfig property.
For security reasons, override config is disabled by default. To enable it, go to Chatflow Configuration -> Security tab, then select the properties that can be overridden.
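As a minimal sketch, an enabled property can be overridden by passing overrideConfig alongside the question in the prediction body. The "temperature" property here is purely illustrative; use a property you have actually enabled in the Security tab:
import requests

API_URL = "http://localhost:3000/api/v1/prediction/<chatflowId>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()

# "temperature" is an illustrative property name; replace it with a
# property enabled under Chatflow Configuration -> Security
output = query({
    "question": "Hey, how are you?",
    "overrideConfig": {
        "temperature": 0.7
    }
})
print(output)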
History
You can prepend history messages to give the LLM some context. For example, if you want the LLM to remember the user's name:
import requests

API_URL = "http://localhost:3000/api/v1/prediction/<chatflowId>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()

output = query({
    "question": "Hey, how are you?",
    "history": [
        {
            "role": "apiMessage",
            "content": "Hello how can I help?"
        },
        {
            "role": "userMessage",
            "content": "Hi my name is Brian"
        },
        {
            "role": "apiMessage",
            "content": "Hi Brian, how can I help?"
        },
    ]
})
async function query(data) {
    const response = await fetch(
        "http://localhost:3000/api/v1/prediction/<chatflowId>",
        {
            method: "POST",
            headers: {
                "Content-Type": "application/json"
            },
            body: JSON.stringify(data)
        }
    );
    const result = await response.json();
    return result;
}

query({
    "question": "Hey, how are you?",
    "history": [
        {
            "role": "apiMessage",
            "content": "Hello how can I help?"
        },
        {
            "role": "userMessage",
            "content": "Hi my name is Brian"
        },
        {
            "role": "apiMessage",
            "content": "Hi Brian, how can I help?"
        },
    ]
}).then((response) => {
    console.log(response);
});
Persist Memory
You can pass a sessionId to persist the state of the conversation, so that every subsequent API call has context from the previous messages. Otherwise, a new session is generated for each call.
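A minimal sketch: the sessionId is passed inside overrideConfig, and "user-123" is an arbitrary placeholder value:
import requests

API_URL = "http://localhost:3000/api/v1/prediction/<chatflowId>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()

# Reuse the same sessionId across calls to keep them in one
# conversation; "user-123" is an arbitrary placeholder
output = query({
    "question": "Hey, how are you?",
    "overrideConfig": {
        "sessionId": "user-123"
    }
})
print(output)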
Document Loaders with Upload
If the flow contains Document Loaders with Upload File functionality, the API looks slightly different: instead of passing the body as JSON, form data is used. This allows you to send files to the API.
Make sure the file type you send is compatible with the file type expected by the Document Loader. For example, if a PDF File Loader is being used, you should only send .pdf files.
To avoid having separate loaders for different file types, we recommend using the File Loader.
import requests

API_URL = "http://localhost:3000/api/v1/vector/upsert/<chatflowId>"

# use form data to upload files
form_data = {
    "files": ('state_of_the_union.txt', open('state_of_the_union.txt', 'rb'))
}

body_data = {
    "returnSourceDocuments": True
}

def query(form_data):
    response = requests.post(API_URL, files=form_data, data=body_data)
    print(response)
    return response.json()

output = query(form_data)
print(output)
// use FormData to upload files
let formData = new FormData();
formData.append("files", input.files[0]);
formData.append("returnSourceDocuments", true);

async function query(formData) {
    const response = await fetch(
        "http://localhost:3000/api/v1/vector/upsert/<chatflowId>",
        {
            method: "POST",
            body: formData
        }
    );
    const result = await response.json();
    return result;
}

query(formData).then((response) => {
    console.log(response);
});
Document Loaders without Upload
For other Document Loader nodes without Upload File functionality, the API body is in JSON format, similar to the Prediction API.
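As a sketch, the upsert call then takes a plain JSON body. The overrideConfig shown here is optional, and the "chunkSize" property is only an illustrative override:
import requests

API_URL = "http://localhost:3000/api/v1/vector/upsert/<chatflowId>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()

# JSON body, as with the Prediction API; overrideConfig is optional
# and "chunkSize" is shown only as an illustrative override
output = query({
    "overrideConfig": {
        "chunkSize": 500
    }
})
print(output)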
overrideConfig (object, optional): The configuration to override the default prediction settings
uploads (object[], optional)
history (object[], optional): The history messages to be prepended
Responses
application/json
cURL
curl -L \
  --request POST \
  --url '/prediction/{id}' \
  --header 'Authorization: Bearer JWT' \
  --header 'Content-Type: application/json' \
  --data '{"overrideConfig":{},"uploads":[{"type":"file","name":"image.png","data":"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABgAAAAYCAYAAADgdz34AAABjElEQVRIS+2Vv0oDQRDG","mime":"image/png"}],"history":[{"content":"Hello, how can I help you?","role":"apiMessage"}]}'