```python
from flowise import Flowise, PredictionData

def test_non_streaming():
    client = Flowise()

    # Test non-streaming prediction
    completion = client.create_prediction(
        PredictionData(
            chatflowId="<chatflow-id>",
            question="What is the capital of France?",
            streaming=False
        )
    )

    # Process and print the response
    for response in completion:
        print("Non-streaming response:", response)

def test_streaming():
    client = Flowise()

    # Test streaming prediction
    completion = client.create_prediction(
        PredictionData(
            chatflowId="<chatflow-id>",
            question="Tell me a joke!",
            streaming=True
        )
    )

    # Process and print each streamed chunk
    print("Streaming response:")
    for chunk in completion:
        print(chunk)

if __name__ == "__main__":
    # Run non-streaming test
    test_non_streaming()

    # Run streaming test
    test_streaming()
```
```javascript
import { FlowiseClient } from 'flowise-sdk'

async function test_streaming() {
    const client = new FlowiseClient({ baseUrl: 'http://localhost:3000' });

    try {
        // For streaming prediction
        const prediction = await client.createPrediction({
            chatflowId: 'fe1145fa-1b2b-45b7-b2ba-bcc5aaeb5ffd',
            question: 'What is the revenue of Apple?',
            streaming: true,
        });

        for await (const chunk of prediction) {
            console.log(chunk);
        }
    } catch (error) {
        console.error('Error:', error);
    }
}

async function test_non_streaming() {
    const client = new FlowiseClient({ baseUrl: 'http://localhost:3000' });

    try {
        // For non-streaming prediction
        const prediction = await client.createPrediction({
            chatflowId: 'fe1145fa-1b2b-45b7-b2ba-bcc5aaeb5ffd',
            question: 'What is the revenue of Apple?',
        });

        console.log(prediction);
    } catch (error) {
        console.error('Error:', error);
    }
}

// Run non-streaming test
test_non_streaming();

// Run streaming test
test_streaming();
```
Override Config
Override the existing input configuration of the chatflow with the overrideConfig property.
For security reasons, override config is disabled by default. To enable it, go to the Security tab in Chatflow Configuration, then select the properties that can be overridden.
```python
import requests

API_URL = "http://localhost:3000/api/v1/prediction/<chatflowId>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()

output = query({
    "question": "Hey, how are you?",
    "overrideConfig": {
        "sessionId": "123",
        "returnSourceDocuments": True
    }
})
```
```javascript
async function query(data) {
    const response = await fetch(
        "http://localhost:3000/api/v1/prediction/<chatflowId>",
        {
            method: "POST",
            headers: {
                "Content-Type": "application/json"
            },
            body: JSON.stringify(data)
        }
    );
    const result = await response.json();
    return result;
}

query({
    "question": "Hey, how are you?",
    "overrideConfig": {
        "sessionId": "123",
        "returnSourceDocuments": true
    }
}).then((response) => {
    console.log(response);
});
```
History
You can prepend history messages to give some context to the LLM. For example, if you want the LLM to remember the user's name:
```python
import requests

API_URL = "http://localhost:3000/api/v1/prediction/<chatflowId>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()

output = query({
    "question": "Hey, how are you?",
    "history": [
        {
            "role": "apiMessage",
            "content": "Hello how can I help?"
        },
        {
            "role": "userMessage",
            "content": "Hi my name is Brian"
        },
        {
            "role": "apiMessage",
            "content": "Hi Brian, how can I help?"
        }
    ]
})
```
```javascript
async function query(data) {
    const response = await fetch(
        "http://localhost:3000/api/v1/prediction/<chatflowId>",
        {
            method: "POST",
            headers: {
                "Content-Type": "application/json"
            },
            body: JSON.stringify(data)
        }
    );
    const result = await response.json();
    return result;
}

query({
    "question": "Hey, how are you?",
    "history": [
        {
            "role": "apiMessage",
            "content": "Hello how can I help?"
        },
        {
            "role": "userMessage",
            "content": "Hi my name is Brian"
        },
        {
            "role": "apiMessage",
            "content": "Hi Brian, how can I help?"
        }
    ]
}).then((response) => {
    console.log(response);
});
```
Persist Memory
You can pass a sessionId to persist the state of the conversation, so every subsequent API call has context from the previous messages. Otherwise, a new session is generated for each call.
```python
import requests

API_URL = "http://localhost:3000/api/v1/prediction/<chatflowId>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()

output = query({
    "question": "Hey, how are you?",
    "overrideConfig": {
        "sessionId": "123"
    }
})
```
```javascript
async function query(data) {
    const response = await fetch(
        "http://localhost:3000/api/v1/prediction/<chatflowId>",
        {
            method: "POST",
            headers: {
                "Content-Type": "application/json"
            },
            body: JSON.stringify(data)
        }
    );
    const result = await response.json();
    return result;
}

query({
    "question": "Hey, how are you?",
    "overrideConfig": {
        "sessionId": "123"
    }
}).then((response) => {
    console.log(response);
});
```
Variables
Pass variables in the API to be used by the nodes in the flow. See more: Variables
```python
import requests

API_URL = "http://localhost:3000/api/v1/prediction/<chatflowId>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()

output = query({
    "question": "Hey, how are you?",
    "overrideConfig": {
        "vars": {
            "foo": "bar"
        }
    }
})
```
```javascript
async function query(data) {
    const response = await fetch(
        "http://localhost:3000/api/v1/prediction/<chatflowId>",
        {
            method: "POST",
            headers: {
                "Content-Type": "application/json"
            },
            body: JSON.stringify(data)
        }
    );
    const result = await response.json();
    return result;
}

query({
    "question": "Hey, how are you?",
    "overrideConfig": {
        "vars": {
            "foo": "bar"
        }
    }
}).then((response) => {
    console.log(response);
});
```
Image Uploads
When Allow Image Upload is enabled, images can be uploaded from the chat interface, or sent through the API as shown below.
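For example, an image can be passed in the uploads array of the request body as a base64 data URI. A minimal sketch (the base64 data is truncated and the chatflow ID is a placeholder):

```python
import requests

API_URL = "http://localhost:3000/api/v1/prediction/<chatflowId>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()

output = query({
    "question": "Can you describe the image?",
    "uploads": [
        {
            # base64 data URI of the image, truncated here for brevity
            "data": "data:image/png;base64,iVBORw0KGgo...",
            "type": "file",
            "name": "Flowise.png",
            "mime": "image/png"
        }
    ]
})
print(output)
```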
If the flow contains Document Loaders with Upload File functionality, the API looks slightly different: instead of passing the body as JSON, form data is used. This allows you to send files to the API.
Make sure the sent file type is compatible with the file type expected by the document loader. For example, if a PDF File Loader is being used, you should only send .pdf files.
To avoid having separate loaders for different file types, we recommend using the File Loader.
```python
import requests

API_URL = "http://localhost:3000/api/v1/vector/upsert/<chatflowId>"

# use form data to upload files
form_data = {
    "files": ("state_of_the_union.txt", open("state_of_the_union.txt", "rb"))
}

body_data = {
    "returnSourceDocuments": True
}

def query(form_data):
    response = requests.post(API_URL, files=form_data, data=body_data)
    print(response)
    return response.json()

output = query(form_data)
print(output)
```
```javascript
// use FormData to upload files
let formData = new FormData();
formData.append("files", input.files[0]);
formData.append("returnSourceDocuments", true);

async function query(formData) {
    const response = await fetch(
        "http://localhost:3000/api/v1/vector/upsert/<chatflowId>",
        {
            method: "POST",
            body: formData
        }
    );
    const result = await response.json();
    return result;
}

query(formData).then((response) => {
    console.log(response);
});
```
Document Loaders without Upload
For other Document Loader nodes without Upload File functionality, the API body is in JSON format, similar to the Prediction API.
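A minimal sketch of such a call, reusing the vector upsert endpoint from the examples above (the overrideConfig shown is optional):

```python
import requests

API_URL = "http://localhost:3000/api/v1/vector/upsert/<chatflowId>"

def query(payload):
    # body is plain JSON; no form data is needed when no file is uploaded
    response = requests.post(API_URL, json=payload)
    return response.json()

output = query({
    "overrideConfig": {
        "returnSourceDocuments": True
    }
})
print(output)
```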
{"numAdded":1,"numDeleted":1,"numUpdated":1,"numSkipped":1,"addedDocs": [ {"pageContent":"This is the content of the page.","metadata": {"author":"John Doe","date":"2024-08-24" } } ]}
Create a new prediction
POST /prediction/{id}
Authorization
Path parameters
id* (string): Chatflow ID
Body
question (string): The question being asked
overrideConfig (object): The configuration to override the default prediction settings (optional)
history (array of objects): The history messages to be prepended (optional)
uploads (array of objects): The files, such as images, to be uploaded with the question (optional)
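Putting these together, a minimal sketch of a request covering the body parameters above (assuming the chatflow is protected with an API key sent as a Bearer token; omit the header otherwise):

```python
import requests

API_URL = "http://localhost:3000/api/v1/prediction/<chatflowId>"
API_KEY = "<your-api-key>"  # placeholder; only needed for protected chatflows

def query(payload):
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
    )
    return response.json()

output = query({
    "question": "Hey, how are you?",
    "overrideConfig": {"sessionId": "123"},
    "history": [
        {"role": "apiMessage", "content": "Hello, how can I help?"}
    ],
})
print(output)
```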
Response
Prediction created successfully
Body
text (string): The result of the prediction
json (object): The result of the prediction in JSON format, if available
question (string): The question asked during the prediction process
chatId (string): The chat ID associated with the prediction
chatMessageId (string): The chat message ID associated with the prediction
{"text":"text","question":"text","chatId":"text","chatMessageId":"text","sessionId":"text","memoryType":"text","sourceDocuments": [ {"pageContent":"This is the content of the page.","metadata": {"author":"John Doe","date":"2024-08-24" } } ],"usedTools": [ {"tool":"Name of the tool","toolInput": {"input":"search query" },"toolOutput":"text" } ],"fileAnnotations": [ {"filePath":"path/to/file","fileName":"file.txt" } ]}