# RAG

Large Language Models (LLMs) have unlocked the potential to create advanced Q\&A chatbots capable of delivering precise answers based on specific content. These systems rely on a method called Retrieval-Augmented Generation (RAG), which enhances their responses by grounding them in relevant source material.

In this tutorial, you'll learn how to create a basic Q\&A application that can answer questions about a given set of documents.

The process can be separated into two sub-processes:

* Indexing
* Retrieval

## Indexing

[Document Stores](/using-flowise/document-stores.md) are designed to help with the whole indexing pipeline: retrieving data from different sources, applying a chunking strategy, upserting to a vector database, and syncing with updated data.

We support a wide range of document loaders, from file sources like PDF, Word, and Google Drive to web scrapers like Playwright, Firecrawl, Apify, and others. You can also create a custom document loader.

<figure><img src="/files/y5cblUIpvriL7T1x4E6H" alt="" width="563"><figcaption></figcaption></figure>

## Retrieval

Based on the user's input, relevant document chunks are fetched from the vector database. The LLM then uses the retrieved context to generate a response.
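To make the two phases concrete, here is a minimal, framework-free sketch of indexing and retrieval in plain Python. It uses toy bag-of-words "embeddings" and cosine similarity purely for illustration; a real pipeline (such as the one Flowise builds for you) would use a proper embedding model and a vector database instead.

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words count vector.
    A real pipeline would call an embedding model here."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Indexing: chunk the source content and store (embedding, chunk) pairs.
chunks = [
    "Flowise supports many document loaders such as PDF and Word.",
    "Retrieval fetches relevant chunks from the vector database.",
    "The LLM generates a response grounded in the retrieved context.",
]
index = [(embed(c), c) for c in chunks]

# Retrieval: embed the query and return the most similar chunk(s),
# which would then be passed to the LLM as context.
def retrieve(query, top_k=1):
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[0]), reverse=True)
    return [c for _, c in ranked[:top_k]]

print(retrieve("Which document loaders are supported?"))
```

The retrieved chunks are what gets injected into the LLM's prompt as grounding context, which is the essence of RAG.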

1. Drag and drop an [Agent](https://docs.flowiseai.com/tutorials/pages/1h4GO6J25X5FiK7akrpm#id-3.-agent-node) node, and configure the model to use.

<figure><img src="/files/ZFHUYQ4bldabo6qJKsa5" alt="" width="391"><figcaption></figcaption></figure>

2. Add a new Knowledge (Document Store) and define what the content is about. This helps the LLM understand when and how to retrieve relevant information. You can also use the auto-generate button to assist with this process.

{% hint style="success" %}
Only document stores that have been upserted can be used
{% endhint %}

<figure><img src="/files/9UfNsTBVSn1fQCcS3I1K" alt="" width="482"><figcaption></figcaption></figure>

3. (Optional) If the data has already been stored in a vector database without going through the document store indexing pipeline, you can also connect directly to the vector database and embedding model.

<figure><img src="/files/hzOLQhyIDZ5wH5IPfSmF" alt="" width="388"><figcaption></figcaption></figure>

4. Add a system prompt, or use the **Generate** button to assist. We recommend using it, as it helps craft a more effective and optimized prompt.

<figure><img src="/files/uEUrUfKJRy2WU8bY3Ltt" alt="" width="482"><figcaption></figcaption></figure>

<figure><img src="/files/8kv9iUaysPpU8uOZhmzo" alt="" width="563"><figcaption></figcaption></figure>

5. Your RAG agent is now ready to use!
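Once the flow is saved, you can also call it programmatically. The sketch below assumes a Flowise instance reachable at `http://localhost:3000` and uses placeholder values for the flow ID; Flowise exposes a prediction endpoint of the form `/api/v1/prediction/<flow-id>`, but check your own instance's API dialog for the exact URL and any required API key.

```python
import json
import urllib.request

# Assumptions (placeholders): your Flowise base URL and the ID of the
# agent flow you just built. Replace both with your own values.
FLOWISE_URL = "http://localhost:3000"
FLOW_ID = "your-flow-id"

def build_prediction_request(question):
    """Build the POST request for the Flowise prediction endpoint."""
    url = f"{FLOWISE_URL}/api/v1/prediction/{FLOW_ID}"
    payload = json.dumps({"question": question}).encode("utf-8")
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )

if __name__ == "__main__":
    # Requires a running Flowise instance with the flow deployed.
    req = build_prediction_request("What is this document about?")
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["text"])
```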

## Resources

{% embed url="https://youtu.be/KHc0ClOIv0A?si=mEZJydM8bT2imKJY" %}


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.flowiseai.com/tutorials/rag.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
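The GET mechanism described above can be scripted with just the standard library. The sketch below only builds and sends the request; the question text is an arbitrary example.

```python
import urllib.parse
import urllib.request

DOC_URL = "https://docs.flowiseai.com/tutorials/rag.md"

def build_ask_url(question):
    """URL-encode the question into the `ask` query parameter."""
    return f"{DOC_URL}?{urllib.parse.urlencode({'ask': question})}"

if __name__ == "__main__":
    # Performs the actual GET; requires network access.
    url = build_ask_url("How do I upsert a document store?")
    with urllib.request.urlopen(url) as resp:
        print(resp.read().decode("utf-8"))
```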
