AutoGen Notes: 02. Tool Use
📅 2025-02-20
In this section you will learn how to use AutoGen to define an agent and have it call functions and tools.
In this example, we give the agent access to a tool: a function that holds a list of vacation destinations and their availability. You can think of this as a scenario in which a travel agent has access to a travel database.
Project Dependencies #
```bash
poetry add python-dotenv
poetry add autogen-agentchat
poetry add "autogen-ext[openai]"
```
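The python-dotenv dependency is there to load the model credentials from a local .env file. Below is a minimal sketch of that setup, assuming the key is stored under the standard OPENAI_API_KEY variable, which the OpenAI client falls back to when no api_key is passed explicitly:

```python
# Minimal sketch (assumption): a .env file next to the script containing a single
# line of the form OPENAI_API_KEY=<your key>.
from dotenv import load_dotenv
from autogen_ext.models.openai import OpenAIChatCompletionClient

load_dotenv()  # copies the variables defined in .env into the process environment
client = OpenAIChatCompletionClient(model="gpt-4o")  # reads OPENAI_API_KEY from the environment
```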
Defining the Function #
```python
def vacation_destination(city: str) -> tuple[str, str]:
    """
    Checks if a specific vacation destination is available

    Args:
        city (str): Name of the city to check

    Returns:
        tuple: Contains city name and availability status ('Available' or 'Unavailable')
    """
    destinations = {
        "Barcelona": "Available",
        "Tokyo": "Unavailable",
        "Cape Town": "Available",
        "Vancouver": "Available",
        "Dubai": "Unavailable",
    }

    if city in destinations:
        return city, destinations[city]
    else:
        return city, "City not found"
```
Defining the FunctionTool #
To let the agent use vacation_destination as a tool, we need to wrap it in a FunctionTool. We also provide a description for the tool, which helps the agent identify what the tool does so it can complete the task the user has requested.
```python
from autogen_core.tools import FunctionTool

get_vacations = FunctionTool(
    vacation_destination,
    description="Search for vacation destinations and if they are available or not.",
)
```
Defining the Agent #
Now we can create the agent with the code below. We define the system_message to give the agent instructions on how to help users find vacation destinations.
We also set the reflect_on_tool_use parameter to True. This lets the agent use the LLM to take the tool call's result and phrase the response in natural language.
You can set this parameter to False to see the difference.
```python
from autogen_agentchat.agents import AssistantAgent

agent = AssistantAgent(
    name="assistant",
    model_client=client,
    tools=[get_vacations],
    system_message="You are a travel agent that helps users find vacation destinations.",
    reflect_on_tool_use=True,
)
```
Running the Agent #
Now we can run the agent with an initial user message asking for a trip to Tokyo. You can change the destination city to see how the agent responds to that city's availability.
Full code:
```python
import asyncio
from dotenv import load_dotenv
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_core.tools import FunctionTool
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_core import CancellationToken

load_dotenv()
client = OpenAIChatCompletionClient(model="gpt-4o")


def vacation_destination(city: str) -> tuple[str, str]:
    """
    Checks if a specific vacation destination is available

    Args:
        city (str): Name of the city to check

    Returns:
        tuple: Contains city name and availability status ('Available' or 'Unavailable')
    """
    destinations = {
        "Barcelona": "Available",
        "Tokyo": "Unavailable",
        "Cape Town": "Available",
        "Vancouver": "Available",
        "Dubai": "Unavailable",
    }

    if city in destinations:
        return city, destinations[city]
    else:
        return city, "City not found"


get_vacations = FunctionTool(
    vacation_destination,
    description="Search for vacation destinations and if they are available or not.",
)

agent = AssistantAgent(
    name="assistant",
    model_client=client,
    tools=[get_vacations],
    system_message="You are a travel agent that helps users find vacation destinations.",
    reflect_on_tool_use=True,
)


async def main():
    response = await agent.on_messages(
        [TextMessage(content="I would like to take a trip to Tokyo", source="user")],
        cancellation_token=CancellationToken(),
    )
    print(response.inner_messages)
    print(response.chat_message)


asyncio.run(main())
```
Output:
```
[ToolCallRequestEvent(source='assistant', models_usage=RequestUsage(prompt_tokens=78, completion_tokens=16), content=[FunctionCall(id='call_lpPgiefSjGHIF3BmduZzQcYh', arguments='{"city":"Tokyo"}', name='vacation_destination')], type='ToolCallRequestEvent'), ToolCallExecutionEvent(source='assistant', models_usage=None, content=[FunctionExecutionResult(content="('Tokyo', 'Unavailable')", call_id='call_lpPgiefSjGHIF3BmduZzQcYh', is_error=False)], type='ToolCallExecutionEvent')]
source='assistant' models_usage=RequestUsage(prompt_tokens=66, completion_tokens=62) content='It looks like detailed recommendations for Tokyo are currently unavailable. However, I’d be happy to share general advice! Tokyo is a vibrant city with a mix of ancient traditions and cutting-edge modernity. Is there something specific you’re looking for in your trip—like food, culture, shopping, or something else?' type='TextMessage'
```
AssistantAgent reflect_on_tool_use #
- When reflect_on_tool_use is False (the default), the tool call result is returned in chat_message as a ToolCallSummaryMessage. The summary text can be customized with tool_call_summary_format (see the sketch after this list).

  ```
  [ToolCallRequestEvent(source='assistant', models_usage=RequestUsage(prompt_tokens=78, completion_tokens=16), content=[FunctionCall(id='call_0hxE8cgdy6rVIo3aIr5iXo7y', arguments='{"city":"Tokyo"}', name='vacation_destination')], type='ToolCallRequestEvent'), ToolCallExecutionEvent(source='assistant', models_usage=None, content=[FunctionExecutionResult(content="('Tokyo', 'Unavailable')", call_id='call_0hxE8cgdy6rVIo3aIr5iXo7y', is_error=False)], type='ToolCallExecutionEvent')]
  source='assistant' models_usage=None content="('Tokyo', 'Unavailable')" type='ToolCallSummaryMessage'
  ```
- When reflect_on_tool_use is True, another model inference is made with the tool calls and their results, and the text response is returned in chat_message as a TextMessage.

  ```
  [ToolCallRequestEvent(source='assistant', models_usage=RequestUsage(prompt_tokens=78, completion_tokens=16), content=[FunctionCall(id='call_pEfelVuaWOJK20iIgjzGQ7Us', arguments='{"city":"Tokyo"}', name='vacation_destination')], type='ToolCallRequestEvent'), ToolCallExecutionEvent(source='assistant', models_usage=None, content=[FunctionExecutionResult(content="('Tokyo', 'Unavailable')", call_id='call_pEfelVuaWOJK20iIgjzGQ7Us', is_error=False)], type='ToolCallExecutionEvent')]
  source='assistant' models_usage=RequestUsage(prompt_tokens=66, completion_tokens=40) content='It seems like I don’t have detailed information for Tokyo at the moment. Do you have specific preferences or activities you’re interested in? I can help you explore similar destinations or guide you in planning!' type='TextMessage'
  ```
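Here is a minimal sketch of customizing the summary for the False case, reusing the client and get_vacations defined above. The {tool_name} and {result} placeholders in tool_call_summary_format are assumed from the AutoGen 0.4 documentation, so verify them against your installed version:

```python
# Sketch (assumed placeholders): customize the ToolCallSummaryMessage returned when
# reflect_on_tool_use is False. The format string is applied to each tool call result.
summary_agent = AssistantAgent(
    name="assistant_summary",
    model_client=client,
    tools=[get_vacations],
    system_message="You are a travel agent that helps users find vacation destinations.",
    reflect_on_tool_use=False,  # return the raw tool result instead of reflecting on it
    tool_call_summary_format="Tool {tool_name} returned: {result}",
)
```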
Summary #
- Model client: OpenAIChatCompletionClient from autogen_ext.models.openai
- Tool definition: FunctionTool from autogen_core.tools
- Agent definition: AssistantAgent from autogen_agentchat.agents
- Message definition: TextMessage from autogen_agentchat.messages
FunctionTool creates custom tools by wrapping standard Python functions.
FunctionTool provides an interface for executing Python functions either asynchronously or synchronously. Each function must include type annotations for all of its parameters and for its return type. These annotations allow FunctionTool to generate the schema needed for input validation, serialization, and informing the LLM about the expected parameters. When the LLM prepares a function call, it uses this schema to generate arguments that conform to the function's specification.
Note: it is the user's responsibility to verify that the tool's output type matches the expected type.
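Below is a minimal sketch of exercising the wrapped function directly through the tool interface, reusing get_vacations from above. The schema property, run_json(), and return_value_as_string() calls are assumed from the autogen_core 0.4 BaseTool interface; check the exact signatures against your installed version:

```python
import asyncio

from autogen_core import CancellationToken


async def inspect_tool() -> None:
    # The schema generated from the type annotations and the description is what the
    # LLM sees when it decides how to call the tool.
    print(get_vacations.schema)

    # Execute the wrapped function through the tool interface (async, even though the
    # underlying function is synchronous).
    result = await get_vacations.run_json(
        {"city": "Barcelona"}, cancellation_token=CancellationToken()
    )
    print(get_vacations.return_value_as_string(result))  # expected: "('Barcelona', 'Available')"


asyncio.run(inspect_tool())
```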