LiteLLM Proxy
Learn how Flowise integrates with LiteLLM Proxy
Use LiteLLM Proxy with Flowise to:
Load balance Azure OpenAI/LLM endpoints
Call 100+ LLMs in the OpenAI Format
Use Virtual Keys to set budgets, rate limits and track usage
How to use LiteLLM Proxy with Flowise
Step 1: Define your LLM Models in the LiteLLM config.yaml file
LiteLLM requires a config with all your models defined - we will call this file litellm_config.yaml
Detailed docs on how to set up the litellm config are available in the LiteLLM documentation.
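For example, a minimal litellm_config.yaml that load balances two Azure deployments under a single model name might look like the sketch below (all deployment names, endpoints, and environment variable names are placeholders; substitute your own):

```yaml
model_list:
  # Two deployments registered under the same model_name are load balanced by the proxy
  - model_name: gpt-4o                          # name your Flowise nodes will request
    litellm_params:
      model: azure/my-gpt4o-deployment          # placeholder Azure deployment
      api_base: https://my-endpoint.openai.azure.com/
      api_key: os.environ/AZURE_API_KEY         # read from an environment variable
  - model_name: gpt-4o
    litellm_params:
      model: azure/my-gpt4o-deployment-eu       # placeholder second deployment
      api_base: https://my-eu-endpoint.openai.azure.com/
      api_key: os.environ/AZURE_API_KEY_EU
```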
Step 2: Start the LiteLLM proxy
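A common way to start the proxy is with the LiteLLM Docker image, mounting the config file from Step 1 (the image tag, port, and flags below follow the LiteLLM docs; adjust them to your environment):

```shell
docker run \
    -v $(pwd)/litellm_config.yaml:/app/config.yaml \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-latest \
    --config /app/config.yaml --detailed_debug
```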
On success, the proxy will start running on http://localhost:4000/
Step 3: Use the LiteLLM Proxy in Flowise
In Flowise, use the standard OpenAI nodes (not the Azure OpenAI nodes) -- this goes for chat models, embeddings, LLMs -- everything.
Set BasePath to the LiteLLM Proxy URL (http://localhost:4000 when running locally)
Set the following header: Authorization: Bearer <your-litellm-master-key>
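Before wiring the proxy into Flowise, you can verify it is reachable by sending a request to its OpenAI-compatible chat completions endpoint (the model name gpt-4o below is a placeholder; use a model_name defined in your config):

```shell
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <your-litellm-master-key>" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello from Flowise"}]
  }'
```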