ChatLocalAI
LocalAI is a drop-in replacement REST API that is compatible with the OpenAI API specifications for local inferencing. It lets you run LLMs (and other models) locally or on-prem on consumer-grade hardware, supporting multiple model families that are compatible with the ggml format.
To use ChatLocalAI within Flowise, follow the steps below:
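First, get a local copy of LocalAI. A minimal sketch, assuming a Docker-based setup; the repository location and exact commands follow the LocalAI README and may differ between versions:

```bash
# Clone the LocalAI repository and enter it
git clone https://github.com/go-skynet/LocalAI
cd LocalAI
```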
For example, download one of the ggml-compatible models that LocalAI supports (such as ggml-gpt4all-j) and place it in the models folder. In the /models folder, you should then be able to see the downloaded model:
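For instance, assuming the example file name ggml-gpt4all-j.bin from above and the Docker Compose setup that ships with LocalAI:

```bash
# The model file must sit inside the models folder of the LocalAI checkout
ls models
# ggml-gpt4all-j.bin

# Start LocalAI; it serves the models found in models/
docker compose up -d --pull always
```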
Now the API is accessible at localhost:8080.
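You can check that the server is up and that your model was picked up by listing the models through the OpenAI-compatible endpoint, e.g.:

```bash
curl http://localhost:8080/v1/models
```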
Drag and drop a new ChatLocalAI component onto the canvas:
Fill in the fields:
Model Name: The model you want to use. Note that it must be inside the /models folder of the LocalAI directory. For instance: ggml-gpt4all-j.bin
Refer to the LocalAI documentation for the list of supported models.
Base Path: The base URL from LocalAI, such as http://localhost:8080/v1
If you are running both Flowise and LocalAI on Docker, you might need to change the base path to http://host.docker.internal:8080/v1. For Linux-based systems, the default Docker gateway should be used since host.docker.internal is not available: http://172.17.0.1:8080/v1
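Since LocalAI mirrors the OpenAI API, you can sanity-check the Base Path and Model Name values outside of Flowise with a direct chat completion request (shown here with the localhost base path from above; adjust the host if you use the Docker gateway):

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ggml-gpt4all-j.bin",
    "messages": [{"role": "user", "content": "Hello, how are you?"}]
  }'
```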
That's it! For more information, refer to the LocalAI documentation.
Watch how you can use LocalAI on Flowise