# ChatLocalAI

## LocalAI Setup

[**LocalAI**](https://github.com/go-skynet/LocalAI) is a drop-in replacement REST API compatible with the OpenAI API specifications for local inferencing. It lets you run LLMs (and other models) locally or on-prem on consumer-grade hardware, and supports multiple model families compatible with the ggml format.

To use ChatLocalAI within Flowise, follow the steps below:

1. ```bash
   git clone https://github.com/go-skynet/LocalAI
   ```
2. ```bash
   cd LocalAI
   ```
3. ```bash
   # copy your models to models/
   cp your-model.bin models/
   ```

For example, download one of the models from [gpt4all.io](https://gpt4all.io/index.html):

```bash
# Download gpt4all-j to models/
wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j.bin
```

In the `models/` folder, you should see the downloaded model:

<figure><img src="https://823733684-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F00tYLwhz5RyR7fJEhrWy%2Fuploads%2Fgit-blob-b56c5e316bcb18c0959ef9968680f5854b5e7723%2Fimage%20(22)%20(1).png?alt=media" alt=""><figcaption></figcaption></figure>
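
You can also confirm this from the terminal; a quick check, run from the LocalAI directory:

```bash
# List the model files LocalAI will serve
ls models/
# ggml-gpt4all-j.bin
```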

Refer [here](https://localai.io/model-compatibility/index.html) for the list of supported models.

4. ```bash
   docker compose up -d --pull always
   ```
5. The API is now accessible at `localhost:8080`:

```bash
# Test API
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"ggml-gpt4all-j.bin","object":"model"}]}
```
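
Since LocalAI implements the OpenAI chat completions endpoint, you can also sanity-check text generation directly. A minimal sketch, assuming the `ggml-gpt4all-j.bin` model downloaded above:

```bash
# Test chat completion against the OpenAI-compatible endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ggml-gpt4all-j.bin",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "temperature": 0.7
  }'
```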

## Flowise Setup

Drag and drop a new ChatLocalAI component onto the canvas:

<figure><img src="https://823733684-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F00tYLwhz5RyR7fJEhrWy%2Fuploads%2Fgit-blob-2400c5dc6ace4720dc844974e1018fa0eeed8a3b%2Fimage%20(39).png?alt=media" alt=""><figcaption></figcaption></figure>

Fill in the fields:

* **Base Path**: The base URL of your LocalAI instance, e.g. <http://localhost:8080/v1>
* **Model Name**: The model you want to use. It must be inside the `models/` folder of the LocalAI directory. For instance: `ggml-gpt4all-j.bin`

{% hint style="info" %}
If you are running both Flowise and LocalAI on Docker, you might need to change the base path to <http://host.docker.internal:8080/v1>. On Linux-based systems, `host.docker.internal` is not available, so use the default Docker gateway instead: <http://172.17.0.1:8080/v1>
{% endhint %}
{% endhint %}
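
Alternatively, on Linux you can make `host.docker.internal` resolvable inside the Flowise container. A sketch, assuming Docker 20.10+ and the standard `flowiseai/flowise` image:

```bash
# Map host.docker.internal to the host gateway so the same
# base path works on Linux as on macOS/Windows
docker run -d --name flowise -p 3000:3000 \
  --add-host=host.docker.internal:host-gateway \
  flowiseai/flowise
```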

That's it! For more information, refer to the LocalAI [docs](https://localai.io/basics/getting_started/index.html).

Watch how to use LocalAI with Flowise:

{% embed url="https://youtu.be/0B0oIs8NS9k" %}

