NVIDIA NIM
If an existing NIM instance is already running (e.g., via NVIDIA's ChatRTX), starting another instance through Flowise without first checking for an existing endpoint may cause conflicts. This happens when multiple podman run commands are executed against the same NIM, leading to failures.
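One way to avoid the double-start problem is to check for a running container before issuing a new podman run. The container name, host port, and image tag below are hypothetical placeholders; substitute the values from your own setup.

```shell
# Sketch: only start a NIM container if one is not already running.
# "my-nim" and the image tag are illustrative, not real names from this doc.
if podman ps --format '{{.Names}}' | grep -q '^my-nim$'; then
  echo "NIM container already running; reusing the existing endpoint."
else
  podman run -d --name my-nim -p 8080:8000 \
    nvcr.io/nim/meta/llama3-8b-instruct:latest
fi
```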
For support, refer to:
NVIDIA Developer Forums – For technical issues and questions.
NVIDIA Developer Discord – For community engagement and announcements.
Set up NVIDIA NIM locally with WSL2.
Chat Models > Drag the Chat NVIDIA NIM node > Click Setup NIM Locally.
If NIM is already installed, click Next. Otherwise, click Download to start the installer.
Select a model image to download.
Once selected, click Next to proceed with the download.
Downloading Image – Duration depends on internet speed.
Learn more about Relax Memory Constraints. The Host Port is the port on your local machine that the container's API will be mapped to.
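The host-port mapping above corresponds to the -p flag on the container run command. As a rough sketch (assuming the container serves its API on port 8000 internally, and using 8080 as an arbitrary host port and a hypothetical image tag):

```shell
# -p HOST:CONTAINER maps the container's API port to the local machine.
# Requests to http://localhost:8080 are forwarded to port 8000 inside
# the container. Both ports and the image tag here are assumptions.
podman run -d --name my-nim \
  -p 8080:8000 \
  nvcr.io/nim/meta/llama3-8b-instruct:latest
```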
Starting the container...
Note: If a container with the selected model is already running, Flowise will ask whether you want to reuse it or start a new one on a different port.
Save the chatflow
🎉 Voila! Your Chat NVIDIA NIM node is now ready to use in Flowise!
Log in or sign up to NVIDIA.
From the top navigation bar, click NIM:
Search for the model you would like to use. To download it locally, we will be using Docker:
Follow the Docker setup instructions. You must first generate an API Key to pull the Docker image:
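The login-then-pull flow typically looks like the sketch below. Copy the exact commands shown on the model's NIM page; the image tag here is a hypothetical example, and NGC_API_KEY stands in for the key you generated.

```shell
# Authenticate against NVIDIA's container registry (nvcr.io) using the
# API key, then pull the model image. The tag is illustrative only.
export NGC_API_KEY="your-api-key-here"
echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin
docker pull nvcr.io/nim/meta/llama3-8b-instruct:latest
```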
Chat Models > drag Chat NVIDIA NIM node
If you are using the NVIDIA-hosted endpoint, you must have an API key: Connect Credential > click Create New. If you are using a local setup, the credential is optional.
Enter the model name, and voila 🎉, your Chat NVIDIA NIM node is now ready to use in Flowise!
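Outside of Flowise, you can sanity-check a locally running NIM container directly, since it exposes an OpenAI-compatible chat completions API. A minimal sketch, assuming the container's API is mapped to localhost:8000 and serves a model named "meta/llama3-8b-instruct" (both are assumptions; use your own host port and model name):

```python
# Minimal sketch of querying a local NIM endpoint over its
# OpenAI-compatible /v1/chat/completions route (stdlib only).
import json
import urllib.request

NIM_URL = "http://localhost:8000/v1/chat/completions"  # assumed host port

def build_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }

def chat(model: str, prompt: str) -> str:
    """POST the prompt to the NIM endpoint and return the reply text."""
    req = urllib.request.Request(
        NIM_URL,
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example usage (requires a running NIM container):
# print(chat("meta/llama3-8b-instruct", "Hello!"))
```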