
ChatOllama

Prerequisite

  1. Download and run Ollama, or run it on Docker. For example, you can use the following command to spin up a Docker instance with llama3:

    docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
    docker exec -it ollama ollama run llama3
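
Once the container is up, you can sanity-check that Ollama is reachable and that the model responds before pointing Flowise at it. This is only a quick sketch, assuming Ollama is exposed on its default port 11434 as in the command above:

    # List the models currently pulled on this Ollama instance
    curl http://localhost:11434/api/tags

    # Send a one-off prompt to llama3 to confirm it generates a reply
    curl http://localhost:11434/api/generate -d '{
      "model": "llama3",
      "prompt": "Say hello in one sentence.",
      "stream": false
    }'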

Setup

  1. Chat Models > drag ChatOllama node

  2. Fill in the model that is running on Ollama. For example: llama3. You can also configure additional parameters such as temperature; see the example after this list.

  3. Voila 🎉, you can now use the ChatOllama node in Flowise
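
The additional parameters on the ChatOllama node (such as temperature and top P) map onto Ollama's generation options. As a point of reference, the sketch below shows the equivalent raw chat request; the endpoint and option names come from the Ollama API, and the default local port 11434 is assumed:

    # Chat request with an explicit temperature, mirroring the node's parameter
    curl http://localhost:11434/api/chat -d '{
      "model": "llama3",
      "messages": [{ "role": "user", "content": "Why is the sky blue?" }],
      "options": { "temperature": 0.4 },
      "stream": false
    }'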

Running on Docker

If you are running both Flowise and Ollama on Docker, you'll have to change the Base URL for ChatOllama.

For Windows and macOS, specify http://host.docker.internal:8000. For Linux-based systems, use the default Docker gateway address instead, since host.docker.internal is not available: http://172.17.0.1:8000. In either case, make sure the port matches the one your Ollama instance is actually exposed on (11434 in the Docker command above).
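
To confirm that the containerized Flowise can actually reach Ollama at that Base URL, you can probe it from inside the Flowise container. This is only a sketch: the container name flowise is an assumption, curl must be available inside the image, and the port should match whatever your Ollama instance is exposed on:

    # Query Ollama's model list from inside the Flowise container
    docker exec -it flowise curl http://host.docker.internal:11434/api/tags

    # On Linux, use the Docker gateway address instead
    docker exec -it flowise curl http://172.17.0.1:11434/api/tags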

Ollama Cloud

  1. Create an API key on ollama.com.

  2. In Flowise, click Create Credential, select Ollama API, and enter your API Key.

  3. Then, set the Base URL to https://ollama.com

  4. Enter the models that are available on Ollama Cloud.
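
Before wiring the credential into a flow, you can verify the API key and a model name directly against Ollama Cloud. The cloud endpoint mirrors the local Ollama chat API; the sketch below assumes your key is exported as OLLAMA_API_KEY and uses gpt-oss:120b purely as an example model name, so substitute one that is actually available on your account:

    # Test an Ollama Cloud model with your API key
    curl https://ollama.com/api/chat \
      -H "Authorization: Bearer $OLLAMA_API_KEY" \
      -d '{
        "model": "gpt-oss:120b",
        "messages": [{ "role": "user", "content": "Hello" }],
        "stream": false
      }'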

