# OpenAI Moderation

<figure><img src="https://823733684-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F00tYLwhz5RyR7fJEhrWy%2Fuploads%2Fgit-blob-625903f29451646220f5d37f043c4bf2a40d3fd0%2Fimage%20(3)%20(1)%20(1)%20(1)%20(1)%20(1)%20(1)%20(1)%20(1).png?alt=media" alt="" width="302"><figcaption><p>OpenAI Moderation Node</p></figcaption></figure>

OpenAI provides a [Moderation API](https://platform.openai.com/docs/guides/moderation) to check whether text or images are potentially harmful. When harmful content is detected, users can specify a custom error message to be displayed instead of the flagged output.

<figure><img src="https://823733684-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F00tYLwhz5RyR7fJEhrWy%2Fuploads%2FENROOGaB62v052TZnfnr%2Fimage.png?alt=media&#x26;token=0a42893e-5c35-4dc9-8248-a59a317c3ae2" alt=""><figcaption></figcaption></figure>
