1. Download Ollama or run it via Docker.

  2. For example, you can use the following commands to spin up a Docker container running llama3:

    docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
    docker exec -it ollama ollama run llama3


  3. Chat Models > drag ChatOllama node

  4. Fill in the name of the model that is running on Ollama, for example: llama3. You can also configure additional parameters.

  5. Voila 🎉, you can now use the ChatOllama node in Flowise
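Before testing the node, you can sanity-check that Ollama is reachable. This is a minimal sketch assuming a local install listening on Ollama's default port, 11434:

```shell
# Default Ollama endpoint (assumption: local install on the default port)
OLLAMA_URL="http://localhost:11434"

# The root endpoint replies "Ollama is running" when the server is up
curl --silent --max-time 3 "$OLLAMA_URL" || echo "Ollama is not reachable at $OLLAMA_URL"

# List the models the server has pulled, via the REST API
curl --silent --max-time 3 "$OLLAMA_URL/api/tags" || true
```

If llama3 appears in the `/api/tags` output, the ChatOllama node should be able to use it.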


If you are running both Flowise and Ollama in Docker, you'll have to change the Base URL for ChatOllama.

For Windows and macOS operating systems, specify http://host.docker.internal:11434. For Linux-based systems, use the default Docker gateway address instead, since host.docker.internal is not available (typically 172.17.0.1, i.e. http://172.17.0.1:11434).
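On Linux you can look up the actual gateway address with `docker network inspect`. This sketch assumes the standard `bridge` network and Ollama's default port, and falls back to the common 172.17.0.1 address if Docker isn't available:

```shell
# Ask Docker for the gateway of the default bridge network
# (assumes the standard "bridge" network; errors suppressed if Docker is absent)
GATEWAY=$(docker network inspect -f '{{range .IPAM.Config}}{{.Gateway}}{{end}}' bridge 2>/dev/null)

# Fall back to the usual default bridge gateway
GATEWAY=${GATEWAY:-172.17.0.1}

# This is the value to enter in the ChatOllama Base URL field
BASE_URL="http://${GATEWAY}:11434"
echo "$BASE_URL"
```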
