# ChatOpenAI

## Prerequisites

1. An [OpenAI](https://openai.com/) account
2. Create an [API key](https://platform.openai.com/api-keys)

## Setup

1. **Chat Models** > drag **ChatOpenAI** node

<figure><img src="/files/0rkobll8CZ00YGDC04K5" alt="" width="563"><figcaption></figcaption></figure>

2. **Connect Credential** > click **Create New**

<figure><img src="/files/LARfF2ycjPJFYzwzcg3i" alt="" width="278"><figcaption></figcaption></figure>

3. Fill in the **ChatOpenAI** credential

<figure><img src="/files/KFXdnahAkkoea00qkOtI" alt="" width="563"><figcaption></figcaption></figure>

4. Voilà [🎉](https://emojipedia.org/party-popper/), you can now use the **ChatOpenAI node** in Flowise

<figure><img src="/files/V6E0aEEfJ5hz5IhasgJu" alt=""><figcaption></figcaption></figure>

## Custom base URL and headers

Flowise supports setting a custom base URL and custom headers on the ChatOpenAI node. This makes it easy to use providers such as OpenRouter, TogetherAI, and others that offer OpenAI API compatibility.
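To see why this works, here is a hedged sketch of what "OpenAI API compatibility" means in practice: any provider exposing the `/chat/completions` route can be targeted by swapping only the base URL and headers, while the request body keeps the standard OpenAI shape. The host, key, and model name below are placeholders, not real endpoints or credentials.

```python
import json

# Placeholder values -- not a real endpoint, key, or model
base_url = "https://example-provider.com/v1"
api_key = "sk-..."  # placeholder credential

# Only the base URL and headers change between providers;
# the request path and body stay in the OpenAI format.
url = f"{base_url}/chat/completions"
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
body = json.dumps({
    "model": "some-model-name",
    "messages": [{"role": "user", "content": "Hello"}],
})
```

The ChatOpenAI node's **Base Path** and **Base Options** fields (shown in the sections below) map onto exactly these two degrees of freedom.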

### TogetherAI

1. Refer to official [docs](https://docs.together.ai/docs/openai-api-compatibility#nodejs) from TogetherAI
2. Create a new credential with TogetherAI API key
3. Click **Additional Parameters** on the ChatOpenAI node
4. Change the Base Path:

<figure><img src="/files/Ah72fmKXLs3YvjesNY1C" alt="" width="563"><figcaption></figcaption></figure>
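As a sketch of what the steps above amount to: the value entered in the **Base Path** field is TogetherAI's OpenAI-compatible endpoint (verify the exact URL against TogetherAI's current docs).

```python
# Base Path value per TogetherAI's OpenAI-compatibility docs
together_base_path = "https://api.together.xyz/v1"

# With this base path set, Flowise sends chat requests to:
endpoint = together_base_path + "/chat/completions"
```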

### OpenRouter

1. Refer to official [docs](https://openrouter.ai/docs#quick-start) from OpenRouter
2. Create a new credential with OpenRouter API key
3. Click **Additional Parameters** on the ChatOpenAI node
4. Change the Base Path and Base Options:

<figure><img src="/files/FT0QIwf7HvbD8kqn0z0i" alt="" width="563"><figcaption></figcaption></figure>
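A hedged sketch of the two field values: **Base Path** is OpenRouter's OpenAI-compatible endpoint, and **Base Options** takes a JSON object of extra request options. The `headers` shape below is one plausible form (OpenRouter documents optional `HTTP-Referer` and `X-Title` attribution headers); check the screenshot and your Flowise version for the exact key names, and replace the placeholder values with your own.

```python
import json

# Base Path value per OpenRouter's docs
base_path = "https://openrouter.ai/api/v1"

# Base Options: assumed shape -- a JSON object whose "headers" key carries
# OpenRouter's optional attribution headers. Values are placeholders.
base_options = {
    "headers": {
        "HTTP-Referer": "https://your-site.example",  # placeholder
        "X-Title": "Your App Name",                   # placeholder
    }
}
print(json.dumps(base_options, indent=2))
```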

## Custom Model

For models that are not listed on the ChatOpenAI node, you can use the **ChatOpenAI Custom** node instead. It allows users to fill in any model name, such as `mistralai/Mixtral-8x7B-Instruct-v0.1`.

<figure><img src="/files/Lg9Mu12SJ4Gqm42aRgwa" alt=""><figcaption></figcaption></figure>

## Image Upload

You can also allow images to be uploaded and analyzed by the LLM. Under the hood, Flowise uses an [OpenAI Vision](https://platform.openai.com/docs/guides/vision) model to process the image. This only works with LLMChain, Conversation Chain, ReAct Agent, and Conversational Agent.

<figure><img src="/files/IiSQQYVebmGTK66AzEww" alt="" width="332"><figcaption></figcaption></figure>

From the chat interface, you will now see a new image upload button:

<figure><img src="/files/CrTD6A7OJqFJSmtCYTPD" alt=""><figcaption></figcaption></figure>

<figure><img src="/files/7ou6yjX60YOeeX6dJGCv" alt=""><figcaption></figcaption></figure>
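For the curious, here is a hedged sketch of the multimodal message shape that ultimately reaches an OpenAI vision-capable model: the user message content becomes a list mixing text parts and image parts, with the image passed as a base64 data URL. The image bytes below are fake placeholders.

```python
import base64

# Fake image bytes standing in for a real uploaded file
fake_image_bytes = b"\x89PNG fake bytes"
data_url = "data:image/png;base64," + base64.b64encode(fake_image_bytes).decode()

# OpenAI vision message format: content is a list of typed parts
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url", "image_url": {"url": data_url}},
    ],
}
```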


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.flowiseai.com/integrations/langchain/chat-models/azure-chatopenai.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
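The request described above can be sketched as follows; the question must be URL-encoded before it is placed in the `ask` parameter (the question text here is just an example).

```python
from urllib.parse import quote

# Page URL from this document; the question is an illustrative example
page = "https://docs.flowiseai.com/integrations/langchain/chat-models/azure-chatopenai.md"
question = "Which chains support image upload with ChatOpenAI?"

# Percent-encode the question so spaces and punctuation survive in the URL
url = f"{page}?ask={quote(question)}"
```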
