Perform a basic vector search:
const searchResponse = await client.search("What was Uber's profit in 2020?");
query (string, required): The search query.
vector_search_settings (VectorSearchSettings | Record<string, any>, default: None): Optional settings for vector search; a dictionary, a VectorSearchSettings object, or None may be passed. If a dictionary or None is passed, R2R uses server-side defaults for any non-specified fields.
kg_search_settings (KGSearchSettings | Record<string, any>, default: None): Optional settings for knowledge graph search; a dictionary, a KGSearchSettings object, or None may be passed. If a dictionary or None is passed, R2R uses server-side defaults for any non-specified fields.
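Because non-specified fields fall back to server-side defaults, a partial settings object is enough. A minimal sketch (the search_limit value here is illustrative, not a recommended setting):
const partialSettingsResponse = await client.search(
  "What was Uber's profit in 2020?",
  {
    // only search_limit is specified; every other vector search field
    // falls back to the server-side defaults
    search_limit: 25
  }
);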

Search custom settings

Search with custom settings, such as bespoke document filters and larger search limits:
// returns only chunks from documents with title `uber_2021.pdf`
const filteredSearchResponse = await client.search(
    "What was Uber's profit in 2020?",
    {
        search_filters: { title: { $eq: "uber_2021.pdf" } },
        search_limit: 100
    }
);
Combine traditional keyword-based search with vector search:
const hybridSearchResponse = await client.search(
    "What was Uber's profit in 2020?",
    { use_hybrid_search: true }
);
Utilize knowledge graph capabilities to enhance search results:
const kgSearchResponse = await client.search(
    "What is a fierce nerd?",
    { use_kg_search: true }
);

Retrieval-Augmented Generation (RAG)

Basic RAG

Generate a response using RAG:
const ragResponse = await client.rag("What was Uber's profit in 2020?");
query (string, required): The query for RAG.
vector_search_settings (VectorSearchSettings | Record<string, any>, default: None): Optional settings for vector search; a dictionary, a VectorSearchSettings object, or None may be passed. If a dictionary is used, non-specified fields will use the server-side defaults.
kg_search_settings (KGSearchSettings | Record<string, any>, default: None): Optional settings for knowledge graph search; a dictionary, a KGSearchSettings object, or None may be passed. If a dictionary or None is passed, R2R uses server-side defaults for any non-specified fields.
rag_generation_config (GenerationConfig | Record<string, any>, default: None): Optional configuration for the LLM used during RAG generation, including model selection and parameters. Defaults to the values specified in r2r.toml.
task_prompt_override (string, default: None): Optional custom prompt to override the default task prompt.
include_title_if_available (boolean, default: true): Whether to augment document chunks with their respective document titles.
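As a sketch of how several of these parameters compose in one call (the prompt text and temperature below are illustrative assumptions, not recommended values):
const customPromptRagResponse = await client.rag({
  query: "What was Uber's profit in 2020?",
  // hypothetical task prompt override; substitute your own instructions
  task_prompt_override: "Answer strictly from the provided context and cite the chunks you used.",
  include_title_if_available: true,
  rag_generation_config: {
    temperature: 0.2
  }
});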

RAG with custom search settings

Use hybrid search in RAG:
const hybridRagResponse = await client.rag({
  query: "Who is Jon Snow?",
  use_hybrid_search: true
});

RAG with custom completion LLM

Use a different LLM model for RAG:
const customLLMRagResponse = await client.rag({
  query: "What is R2R?",
  rag_generation_config: {
    model: "anthropic/claude-3-opus-20240229"
  }
});

Streaming RAG

Stream RAG responses for real-time applications:
const streamResponse = await client.rag({
  query: "Who was Aristotle?",
  rag_generation_config: { stream: true }
});

if (streamResponse instanceof ReadableStream) {
  const reader = streamResponse.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    console.log(new TextDecoder().decode(value));
  }
}
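If you also want the complete generation as a single string (for example, to store it), one option is to accumulate the decoded chunks instead of logging them. This is a variant of the loop above, not something to run in addition to it, since a ReadableStream can only be read once:
let fullAnswer = "";
if (streamResponse instanceof ReadableStream) {
  const reader = streamResponse.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true lets the decoder handle multi-byte characters split across chunks
    fullAnswer += decoder.decode(value, { stream: true });
  }
}
console.log(fullAnswer);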

Advanced RAG Techniques

R2R supports advanced Retrieval-Augmented Generation (RAG) techniques that can be easily configured at runtime. These techniques include Hypothetical Document Embeddings (HyDE) and RAG-Fusion, which can significantly enhance the quality and relevance of retrieved information. To use an advanced RAG technique, you can specify the search_strategy parameter in your vector search settings:
const client = new r2rClient("http://localhost:7272");

// Using HyDE
const hydeResponse = await client.rag(
    "What are the main themes in Shakespeare's plays?",
    {
        vector_search_settings: {
            search_strategy: "hyde",
            search_limit: 10
        }
    }
);

// Using RAG-Fusion
const ragFusionResponse = await client.rag(
    "Explain the theory of relativity",
    {
        vector_search_settings: {
            search_strategy: "rag_fusion",
            search_limit: 20
        }
    }
);
For a comprehensive guide on implementing and optimizing advanced RAG techniques in R2R, including HyDE and RAG-Fusion, please refer to our Advanced RAG Cookbook.

Customizing RAG

Putting everything together for highly custom RAG functionality:
const customRagResponse = await client.rag({
  query: "Who was Aristotle?",
  use_vector_search: true,
  use_hybrid_search: true,
  use_kg_search: true,
  kg_generation_config: {},
  rag_generation_config: {
    model: "anthropic/claude-3-haiku-20240307",
    temperature: 0.7,
    stream: true
  }
});
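Because stream is set to true here, the returned value should be consumed as a ReadableStream, exactly as shown in the Streaming RAG example above.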

Agents

Multi-turn agentic RAG

The R2R application includes agents that come equipped with a search tool, enabling them to perform RAG. Use the R2R agent for multi-turn conversations:
const messages = [
  { role: "user", content: "What was Aristotle's main contribution to philosophy?" },
  { role: "assistant", content: "Aristotle made numerous significant contributions to philosophy, but one of his main contributions was in the field of logic and reasoning. He developed a system of formal logic, which is considered the first comprehensive system of its kind in Western philosophy. This system, often referred to as Aristotelian logic or term logic, provided a framework for deductive reasoning and laid the groundwork for scientific thinking." },
  { role: "user", content: "Can you elaborate on how this influenced later thinkers?" }
];

const agentResponse = await client.agent({
  messages,
  use_vector_search: true,
  use_hybrid_search: true
});
Note that any of the customization shown in the AI-powered search and RAG documentation above can also be applied here.
messages (Array<Message>, required): The array of messages to pass to the RAG agent.
use_vector_search (boolean, default: true): Whether to use vector search.
search_filters (object, optional): Optional filters for the search.
search_limit (number, default: 10): The maximum number of search results to return.
use_hybrid_search (boolean, default: false): Whether to perform a hybrid search (combining vector and keyword search).
use_kg_search (boolean, default: false): Whether to use knowledge graph search.
generation_config (object, optional): Optional configuration for knowledge graph search generation.
rag_generation_config (GenerationConfig, optional): Optional configuration for RAG generation, including model selection and parameters.
task_prompt_override (string, optional): Optional custom prompt to override the default task prompt.
include_title_if_available (boolean, default: true): Whether to include document titles in the context if available.
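As a sketch combining several of these parameters in a single agent call (the filter value, limit, and prompt text are illustrative assumptions, not R2R defaults):
const customAgentResponse = await client.agent({
  messages,
  use_vector_search: true,
  use_hybrid_search: true,
  // hypothetical document title, used purely to illustrate the filter shape
  search_filters: { title: { $eq: "aristotle_works.pdf" } },
  search_limit: 20,
  // hypothetical prompt override
  task_prompt_override: "Answer using only the retrieved context.",
  rag_generation_config: {
    model: "anthropic/claude-3-haiku-20240307",
    temperature: 0.3
  }
});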

Multi-turn agentic RAG with streaming

The response from the RAG agent may be streamed directly back:
const messages = [
  { role: "user", content: "What was Aristotle's main contribution to philosophy?" },
  { role: "assistant", content: "Aristotle made numerous significant contributions to philosophy, but one of his main contributions was in the field of logic and reasoning. He developed a system of formal logic, which is considered the first comprehensive system of its kind in Western philosophy. This system, often referred to as Aristotelian logic or term logic, provided a framework for deductive reasoning and laid the groundwork for scientific thinking." },
  { role: "user", content: "Can you elaborate on how this influenced later thinkers?" }
];

const agentResponse = await client.agent({
  messages,
  use_vector_search: true,
  use_hybrid_search: true,
  rag_generation_config: { stream: true }
});

if (agentResponse instanceof ReadableStream) {
  const reader = agentResponse.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    console.log(new TextDecoder().decode(value));
  }
}
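To carry the conversation into another turn, one option is to accumulate the streamed reply (in place of logging each chunk as above) and append it to messages before the next call. A sketch, assuming the stream carries only the assistant's reply text:
let assistantReply = "";
if (agentResponse instanceof ReadableStream) {
  const reader = agentResponse.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    assistantReply += decoder.decode(value, { stream: true });
  }
}

// messages is the same array declared above; push the new turns onto it
messages.push(
  { role: "assistant", content: assistantReply },
  { role: "user", content: "Which later thinkers built most directly on his logic?" }
);

const followUpResponse = await client.agent({
  messages,
  use_vector_search: true,
  use_hybrid_search: true,
  rag_generation_config: { stream: true }
});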