NVIDIA NIM

Local

Important Note on Running NIM with Flowise

If a NIM instance is already running (e.g., via NVIDIA's ChatRTX), starting another instance through Flowise without first checking for an existing endpoint may cause conflicts. This issue occurs when multiple `podman run` commands are executed against the same NIM, leading to failures.
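One way to avoid this conflict is to probe the host port before starting another container. The sketch below (an illustration, not part of Flowise; port 8000 is a placeholder, use your configured Host Port) checks whether anything is already listening:

```python
import socket

def nim_endpoint_in_use(host: str = "127.0.0.1", port: int = 8000) -> bool:
    """Return True if something is already listening on host:port.

    Port 8000 is assumed here as a typical NIM host port; adjust it to
    whatever Host Port you configured when starting the container.
    """
    try:
        with socket.create_connection((host, port), timeout=1):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if nim_endpoint_in_use():
        print("A service is already listening on port 8000; reuse it "
              "instead of issuing another `podman run` for the same NIM.")
    else:
        print("Port 8000 is free; it is safe to start a new NIM container.")
```

If the check reports an existing listener, reuse that endpoint rather than launching a second container against the same NIM.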

For support, refer to the NVIDIA NIM documentation.

Prerequisite

Flowise

  1. Under Chat Models, drag the Chat NVIDIA NIM node onto the canvas, then click Setup NIM Locally.

  2. If NIM is already installed, click Next. Otherwise, click Download to start the installer.

  3. Select a model image to download.

  4. Once selected, click Next to proceed with the download.

  5. Wait for the image to download; the duration depends on your internet speed.

  6. Learn more about Relax Memory Constraints. The Host Port is the port on the local machine that the container's port is mapped to.

  7. Flowise then starts the container.

Note: If a container with the selected model is already running, Flowise will ask whether you want to reuse it. You can reuse the running container or start a new one on a different port.

  8. Save the chatflow.

  9. 🎉 Voila! Your Chat NVIDIA NIM node is now ready to use in Flowise!
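When the preferred Host Port is taken by an existing container, a new container needs a different port, which mirrors the choice Flowise offers above. A minimal sketch of that decision (assuming 8000 as the preferred Host Port; the function name and default are illustrative):

```python
import socket

def pick_host_port(preferred: int = 8000) -> int:
    """Return `preferred` if it is free, otherwise an OS-assigned free port.

    8000 is only an example default; pass whatever Host Port you prefer.
    """
    # Try to bind the preferred port; success means nothing else holds it.
    with socket.socket() as probe:
        try:
            probe.bind(("127.0.0.1", preferred))
            return preferred
        except OSError:
            pass  # Preferred port is taken, fall through to an OS pick.
    with socket.socket() as probe:
        probe.bind(("127.0.0.1", 0))  # Port 0 = let the OS pick a free port.
        return probe.getsockname()[1]
```

You would then pass the returned port as the Host Port for the new container.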

Cloud

Prerequisite

  1. Log in or sign up to NVIDIA.

  2. From the top navigation bar, click NIM.

  3. Search for the model you would like to use. In this guide, we use Docker to download it locally.

  4. Follow the instructions from the Docker setup. You must first get an API key to pull the Docker image.

Flowise

  1. Under Chat Models, drag the Chat NVIDIA NIM node onto the canvas.

  2. If you are using the NVIDIA-hosted endpoint, you must provide your API key: under Connect Credential, click Create New. If you are using a local setup, this is optional.

  3. Enter the model name, and voila 🎉, your Chat NVIDIA NIM node is now ready to use in Flowise!
