LiteLLM Proxy
Learn how Flowise integrates with LiteLLM Proxy
Use LiteLLM Proxy with Flowise to:
Load balance Azure OpenAI/LLM endpoints
Call 100+ LLMs in the OpenAI Format
Use Virtual Keys to set budgets, rate limits and track usage
LiteLLM requires a config file with all of your models defined; we will call this file litellm_config.yaml.
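A minimal sketch of what this config might look like is shown below. The model names, Azure deployments, endpoints, and environment variable names are placeholders you should replace with your own; see the LiteLLM docs for the full set of options.

```yaml
model_list:
  # Route the name "gpt-4o" to an Azure deployment (placeholder values)
  - model_name: gpt-4o
    litellm_params:
      model: azure/my-gpt-4o-deployment          # your Azure deployment name
      api_base: https://my-resource.openai.azure.com/
      api_key: "os.environ/AZURE_API_KEY"        # read from an environment variable
  # A second endpoint under the same model_name lets the proxy load balance between them
  - model_name: gpt-4o
    litellm_params:
      model: azure/my-gpt-4o-deployment-eu
      api_base: https://my-eu-resource.openai.azure.com/
      api_key: "os.environ/AZURE_API_KEY_EU"
```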
Start the proxy with this config (for example, litellm --config litellm_config.yaml). On success, the proxy will be running on http://localhost:4000/
In Flowise, use the standard OpenAI nodes (not the Azure OpenAI nodes). This applies to chat models, embeddings, LLMs, everything.
Set the BasePath to the LiteLLM Proxy URL (http://localhost:4000 when running locally)
Set the following header: Authorization: Bearer <your-litellm-master-key>
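Because the proxy exposes the OpenAI format, you can sanity-check it the same way the Flowise OpenAI nodes will use it. The sketch below uses the openai Python package and assumes the proxy is running locally on port 4000, that "gpt-4o" matches a model_name in your litellm_config.yaml, and that you substitute your actual master key.

```python
# Quick check that the LiteLLM Proxy answers OpenAI-format requests,
# mirroring the BasePath and Authorization header configured in Flowise.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",        # LiteLLM Proxy URL (the Flowise BasePath)
    api_key="<your-litellm-master-key>",     # sent as "Authorization: Bearer ..."
)

response = client.chat.completions.create(
    model="gpt-4o",                          # must match a model_name in litellm_config.yaml
    messages=[{"role": "user", "content": "Hello from the Flowise setup check"}],
)
print(response.choices[0].message.content)
```

If this call returns a completion, the same BasePath and header values will work in the Flowise OpenAI nodes.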