Learn how to run a Retrieval-Augmented Generation system locally using R2R
R2R supports RAG with local LLMs through the Ollama library. You may follow the instructions on their official website to install Ollama outside of the R2R Docker.

To launch R2R with the `local_llm` configuration, run:

```bash
r2r serve --docker --config-name=local_llm
```

This configuration can be customized to your needs by setting up a standalone project.
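If you are managing Ollama yourself, the models the configuration refers to must be available locally before serving. A minimal sketch, assuming you want the `mxbai-embed-large` embedding model described in the configuration details below (`ollama pull` is Ollama's standard command for downloading a model):

```bash
# Download the embedding model referenced by the local_llm configuration.
# Pull any chat model you plan to use for completions the same way.
ollama pull mxbai-embed-large
```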
Local Configuration Details
The `local_llm` configuration file (`core/configs/local_llm.toml`) includes `ollama` and the model `mxbai-embed-large` to run embeddings. We have excluded media file parsers, as they are not yet supported locally. A hypothetical excerpt of such a file is sketched below.
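For orientation only, here is a minimal sketch of what the embedding section of a TOML file like this might look like. The key names (`provider`, `base_model`) are illustrative assumptions, not copied from the real file; consult `core/configs/local_llm.toml` itself for the authoritative contents.

```toml
# Hypothetical excerpt -- key names are assumptions for illustration,
# not taken verbatim from core/configs/local_llm.toml.
[embedding]
provider = "ollama"               # embeddings served by a local Ollama instance
base_model = "mxbai-embed-large"  # embedding model named in the docs above
```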