
Build vector stores and retrievers #

https://python.langchain.com/docs/tutorials/retrievers/

This tutorial will familiarize you with LangChain's vector store and retriever abstractions. These abstractions are designed to support retrieval of data from (vector) databases and other sources, for integration with LLM workflows. They are important for applications that fetch data to be reasoned over as part of model inference, as in the case of retrieval-augmented generation (RAG; see our RAG tutorial).

Concepts #

This tutorial focuses on retrieval of text data. We will cover the following concepts:

  • Documents
  • Vector stores
  • Retrievers

Setup #

Jupyter Notebook #

This tutorial, like several others, runs in a Jupyter notebook. See here for instructions on how to install it.

Installation #

This tutorial requires the langchain, langchain-chroma, and langchain-openai packages:

pip install langchain langchain-chroma langchain-openai

langchain-chroma

langchain-chroma is the Chroma integration listed under Vector stores in the LangChain Integrations; its documentation lives at https://python.langchain.com/docs/integrations/vectorstores/chroma/

Chroma is an AI-native open-source vector database focused on developer productivity and happiness. Chroma is licensed under Apache 2.0. You can view Chroma's full documentation at this page, and find the API reference for the LangChain integration at this page.

Loading environment variables #

Configure OPENAI_API_KEY, OPENAI_BASE_URL, MODEL_NAME, and EMBEDDING_MODEL_NAME in a .env file:

pip install python-dotenv
from dotenv import load_dotenv
assert load_dotenv()

import os
MODEL_NAME = os.environ.get("MODEL_NAME")
EMBEDDING_MODEL_NAME = os.environ.get("EMBEDDING_MODEL_NAME")
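For example, a .env file in the project root might look like the following (every value below is a placeholder; substitute your own key, endpoint, and model names):

```
OPENAI_API_KEY=your-api-key
OPENAI_BASE_URL=https://api.openai.com/v1
MODEL_NAME=gpt-4o-mini
EMBEDDING_MODEL_NAME=text-embedding-3-small
```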

LangSmith tracing setup (optional) #

Omitted; see here.

Documents #

LangChain implements a Document abstraction, intended to represent a unit of text and associated metadata. It has two attributes:

  • page_content: a string representing the content;
  • metadata: a dict containing arbitrary metadata.

The metadata attribute can capture information about the source of the document, its relationship to other documents, and other information. Note that an individual Document object often represents a chunk of a larger document.

Let's generate some sample documents:
from langchain_core.documents import Document

documents = [
    Document(
        page_content="Dogs are great companions, known for their loyalty and friendliness.",
        metadata={"source": "mammal-pets-doc"},
    ),
    Document(
        page_content="Cats are independent pets that often enjoy their own space.",
        metadata={"source": "mammal-pets-doc"},
    ),
    Document(
        page_content="Goldfish are popular pets for beginners, requiring relatively simple care.",
        metadata={"source": "fish-pets-doc"},
    ),
    Document(
        page_content="Parrots are intelligent birds capable of mimicking human speech.",
        metadata={"source": "bird-pets-doc"},
    ),
    Document(
        page_content="Rabbits are social animals that need plenty of space to hop around.",
        metadata={"source": "mammal-pets-doc"},
    ),
]

API Reference: Document

Here we've generated five documents, containing metadata that indicates three distinct "sources".

Vector stores #

Vector search is a common way to store and search over unstructured data (such as unstructured text). The idea is to store numeric vectors associated with the text. Given a query, we can embed it as a vector of the same dimension and use vector similarity metrics to identify related data in the store.
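The idea can be sketched in a few lines of plain Python. The 3-dimensional "embeddings" below are made-up toy values for illustration only; real embedding models produce vectors with hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product of the vectors divided by the
    # product of their lengths; 1.0 means "same direction".
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# A toy "vector store": text keys mapped to made-up embedding vectors.
store = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.3, 0.1],
    "car": [0.0, 0.2, 0.9],
}

query = [0.85, 0.15, 0.05]  # pretend this is the embedded query

# Rank stored items by similarity to the query, most similar first.
ranked = sorted(store, key=lambda k: cosine_similarity(store[k], query), reverse=True)
# → ["cat", "dog", "car"]: the vectors closest in direction to the query win.
```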

LangChain's VectorStore objects contain methods for adding text and Document objects to the store, and for querying the store using various similarity metrics. They are often initialized with embedding models, which determine how text data is translated to numeric vectors.

LangChain includes a suite of integrations with different vector store technologies. Some vector stores are hosted by a provider (e.g., various cloud providers) and require specific credentials to use; some (such as Postgres) run in separate infrastructure that can be run locally or via a third party; others can run in memory for lightweight workloads. Here we will demonstrate usage of LangChain VectorStores using Chroma, which includes an in-memory implementation.

To instantiate a vector store, we often need to provide an embedding model to specify how text should be converted into a numeric vector. Here we will use OpenAI embeddings.

from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings

vectorstore = Chroma.from_documents(
    documents,
    # Use the same embedding model for indexing as for querying later on;
    # mixing different models produces incompatible vectors.
    embedding=OpenAIEmbeddings(model=EMBEDDING_MODEL_NAME),
)

API Reference: OpenAIEmbeddings

Calling .from_documents here will add the documents to the vector store. VectorStore implements methods for adding documents that can also be called after the object is instantiated. Most implementations will allow you to connect to an existing vector store, e.g., by providing a client, index name, or other information. See the documentation for a specific integration for more detail.

Once we've instantiated a VectorStore that contains documents, we can query it. VectorStore includes methods for querying:

  • Synchronously and asynchronously;
  • By string query and by vector;
  • With and without returning similarity scores;
  • By similarity and maximum marginal relevance (to balance similarity to the query with diversity in retrieved results).

The outputs of these methods will generally include lists of Document objects.

Examples #

Return documents based on similarity to a string query:

vectorstore.similarity_search("cat")
[Document(metadata={'source': 'mammal-pets-doc'}, page_content='Cats are independent pets that often enjoy their own space.'),
 Document(metadata={'source': 'mammal-pets-doc'}, page_content='Dogs are great companions, known for their loyalty and friendliness.'),
 Document(metadata={'source': 'mammal-pets-doc'}, page_content='Rabbits are social animals that need plenty of space to hop around.'),
 Document(metadata={'source': 'fish-pets-doc'}, page_content='Goldfish are popular pets for beginners, requiring relatively simple care.')]

Async query:

await vectorstore.asimilarity_search("cat")
[Document(metadata={'source': 'mammal-pets-doc'}, page_content='Cats are independent pets that often enjoy their own space.'),
 Document(metadata={'source': 'mammal-pets-doc'}, page_content='Dogs are great companions, known for their loyalty and friendliness.'),
 Document(metadata={'source': 'mammal-pets-doc'}, page_content='Rabbits are social animals that need plenty of space to hop around.'),
 Document(metadata={'source': 'fish-pets-doc'}, page_content='Goldfish are popular pets for beginners, requiring relatively simple care.')]

Return scores:

# Note that providers implement different scores; Chroma here
# returns a distance metric that should vary inversely with
# similarity.
vectorstore.similarity_search_with_score("cat")
[(Document(metadata={'source': 'mammal-pets-doc'}, page_content='Cats are independent pets that often enjoy their own space.'),
  1.2406271696090698),
 (Document(metadata={'source': 'mammal-pets-doc'}, page_content='Dogs are great companions, known for their loyalty and friendliness.'),
  1.550119400024414),
 (Document(metadata={'source': 'mammal-pets-doc'}, page_content='Rabbits are social animals that need plenty of space to hop around.'),
  1.6296132802963257),
 (Document(metadata={'source': 'fish-pets-doc'}, page_content='Goldfish are popular pets for beginners, requiring relatively simple care.'),
  1.7069131135940552)]

Return documents based on similarity to an embedded query:

embedding = OpenAIEmbeddings(model=EMBEDDING_MODEL_NAME).embed_query("cat")

vectorstore.similarity_search_by_vector(embedding)
[Document(metadata={'source': 'mammal-pets-doc'}, page_content='Cats are independent pets that often enjoy their own space.'),
 Document(metadata={'source': 'mammal-pets-doc'}, page_content='Dogs are great companions, known for their loyalty and friendliness.'),
 Document(metadata={'source': 'mammal-pets-doc'}, page_content='Rabbits are social animals that need plenty of space to hop around.'),
 Document(metadata={'source': 'fish-pets-doc'}, page_content='Goldfish are popular pets for beginners, requiring relatively simple care.')]

Learn more: see the VectorStore API reference.

Retrievers #

LangChain VectorStore objects do not subclass Runnable, and so cannot be directly integrated into LangChain Expression Language (LCEL) chains.

LangChain Retrievers are Runnables, so they implement a standard set of methods (e.g., synchronous and asynchronous invoke and batch operations) and are designed to be incorporated into LCEL chains.

We can create a simple version of this ourselves, without subclassing Retriever. If we choose what method we wish to use to retrieve documents, we can create a runnable easily. Below we will build one around the similarity_search method:

from langchain_core.documents import Document
from langchain_core.runnables import RunnableLambda

retriever = RunnableLambda(vectorstore.similarity_search).bind(k=1)  # select top result

retriever.batch(["cat", "shark"])

API Reference: Document | RunnableLambda

[[Document(metadata={'source': 'mammal-pets-doc'}, page_content='Cats are independent pets that often enjoy their own space.')],
 [Document(metadata={'source': 'fish-pets-doc'}, page_content='Goldfish are popular pets for beginners, requiring relatively simple care.')]]

Vectorstores implement an as_retriever method that will generate a Retriever, specifically a VectorStoreRetriever. These retrievers include specific search_type and search_kwargs attributes that identify what methods of the underlying vector store to call, and how to parameterize them. For instance, we can replicate the above with the following:

retriever = vectorstore.as_retriever(
    search_type="similarity",
    search_kwargs={"k": 1},
)

retriever.batch(["cat", "shark"])
[[Document(metadata={'source': 'mammal-pets-doc'}, page_content='Cats are independent pets that often enjoy their own space.')],
 [Document(metadata={'source': 'fish-pets-doc'}, page_content='Goldfish are popular pets for beginners, requiring relatively simple care.')]]

VectorStoreRetriever supports search types of "similarity" (default), "mmr" (maximum marginal relevance, described above), and "similarity_score_threshold". We can use the latter to threshold documents output by the retriever by similarity score.

Retrievers can easily be incorporated into more complex applications, such as retrieval-augmented generation (RAG) applications that combine a given question with retrieved context into a prompt for an LLM. Below we show a minimal example.

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model=MODEL_NAME)
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

message = """
Answer this question using the provided context only.

{question}

Context:
{context}
"""

prompt = ChatPromptTemplate.from_messages([("human", message)])
rag_chain = {"context": retriever, "question": RunnablePassthrough()} | prompt | llm

API Reference: ChatPromptTemplate | RunnablePassthrough

response = rag_chain.invoke("tell me about cats")

print(response.content)
Cats are independent pets that often enjoy their own space.

Learn more #

Retrieval strategies can be rich and complex.

The retrievers section of the how-to guides covers these and other built-in retrieval strategies.

It's also easy to extend the BaseRetriever class in order to implement custom retrievers. See our how-to guide.

© 2024 青蛙小白