As an AI product manager, I find LangGraph appealing because it offers a new way to build AI applications: it models an application as a graph whose nodes can loop back to one another, which I find more intuitive than a traditional agent.
https://github.langchain.ac.cn/langgraph/

The API is simple to call, and it supports multi-agent setups, a persistence layer, and human-in-the-loop interaction, which makes it a good choice for building application demos. Tool calling is especially worth noting: MCP emerged precisely to standardize tool calling, and LangGraph now supports MCP as well. That makes it a very useful tool for developers.

A single agent can operate effectively within one domain using a small number of tools, but even very capable models become less effective as the number of tools grows. One way to tackle complex tasks is "divide and conquer": create a specialized agent for each task or domain and route each task to the right "expert". This is easy to implement in LangGraph.

In this document I will collect some commonly used snippets. Some important features are simple enough to understand that I will leave them out; I will only include the key code and diagrams.

LangGraph + Multi-Agent

https://langchain-ai.github.io/langgraph/#multi-agent

Each agent can be represented as a node in the graph: the node executes the agent's step and decides what to do next - finish execution or route to another agent (including routing back to itself, e.g. to run in a loop). A common routing pattern in multi-agent architectures is the handoff. A handoff lets you specify: 1. which agent to go to next (e.g. the name of the target node), and 2. what information to pass to that agent (e.g. a state update).
All of this can be found in the official docs; I am just collecting it here for convenience.

%%capture --no-stderr
%pip install -U langgraph langchain-openai

import getpass
import os


def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("OPENAI_API_KEY")

This just sets the LLM API key; nothing special here.

Travel recommendation example
We will create 3 agents:
travel_advisor: can recommend general travel destinations, and can ask sightseeing_advisor and hotel_advisor for help.
sightseeing_advisor: can provide sightseeing recommendations, and can ask travel_advisor and hotel_advisor for help.
hotel_advisor: can provide hotel recommendations, and can ask sightseeing_advisor and travel_advisor for help.


from typing_extensions import TypedDict, Literal

from langchain_openai import ChatOpenAI
from langchain_core.messages import AnyMessage
from langgraph.graph import MessagesState, StateGraph, START, END
from langgraph.types import Command

model = ChatOpenAI(model="gpt-4o")


# Define a helper that each agent node will call.
def call_llm(messages: list[AnyMessage], target_agent_nodes: list[str]):
    """Call LLM with structured output to get a natural language response as well as a target agent (node) to go to next.

    Args:
        messages: list of messages to pass to the LLM
        target_agent_nodes: list of the node names of the target agents to navigate to
    """
    # Define the JSON schema for structured output:
    # - model's text response (`response`)
    # - name of the node to go to next (or '__end__')
    # see more on structured output here https://python.langchain.com/docs/concepts/structured_outputs
    json_schema = {
        "name": "Response",
        "parameters": {
            "type": "object",
            "properties": {
                "response": {
                    "type": "string",
                    "description": "A human readable response to the original question. Does not need to be a final response. Will be streamed back to the user.",
                },
                "goto": {
                    "enum": [*target_agent_nodes, "__end__"],
                    "type": "string",
                    "description": "The next agent to call, or __end__ if the user's query has been resolved. Must be one of the specified values.",
                },
            },
            "required": ["response", "goto"],
        },
    }
    response = model.with_structured_output(json_schema).invoke(messages)
    return response


def travel_advisor(
    state: MessagesState,
) -> Command[Literal["sightseeing_advisor", "hotel_advisor", "__end__"]]:
    system_prompt = (
        "You are a general travel expert that can recommend travel destinations (e.g. countries, cities, etc). "
        "If you need specific sightseeing recommendations, ask 'sightseeing_advisor' for help. "
        "If you need hotel recommendations, ask 'hotel_advisor' for help. "
        "If you have enough information to respond to the user, return '__end__'. "
        "Never mention other agents by name."
    )
    messages = [{"role": "system", "content": system_prompt}] + state["messages"]
    target_agent_nodes = ["sightseeing_advisor", "hotel_advisor"]
    response = call_llm(messages, target_agent_nodes)
    ai_msg = {"role": "ai", "content": response["response"], "name": "travel_advisor"}
    # handoff to another agent or halt
    return Command(goto=response["goto"], update={"messages": ai_msg})


def sightseeing_advisor(
    state: MessagesState,
) -> Command[Literal["travel_advisor", "hotel_advisor", "__end__"]]:
    system_prompt = (
        "You are a travel expert that can provide specific sightseeing recommendations for a given destination. "
        "If you need general travel help, go to 'travel_advisor' for help. "
        "If you need hotel recommendations, go to 'hotel_advisor' for help. "
        "If you have enough information to respond to the user, return '__end__'. "
        "Never mention other agents by name."
    )
    messages = [{"role": "system", "content": system_prompt}] + state["messages"]
    target_agent_nodes = ["travel_advisor", "hotel_advisor"]
    response = call_llm(messages, target_agent_nodes)
    ai_msg = {
        "role": "ai",
        "content": response["response"],
        "name": "sightseeing_advisor",
    }
    # handoff to another agent or halt
    return Command(goto=response["goto"], update={"messages": ai_msg})


def hotel_advisor(
    state: MessagesState,
) -> Command[Literal["travel_advisor", "sightseeing_advisor", "__end__"]]:
    system_prompt = (
        "You are a travel expert that can provide hotel recommendations for a given destination. "
        "If you need general travel help, ask 'travel_advisor' for help. "
        "If you need specific sightseeing recommendations, ask 'sightseeing_advisor' for help. "
        "If you have enough information to respond to the user, return '__end__'. "
        "Never mention other agents by name."
    )
    messages = [{"role": "system", "content": system_prompt}] + state["messages"]
    target_agent_nodes = ["travel_advisor", "sightseeing_advisor"]
    response = call_llm(messages, target_agent_nodes)
    ai_msg = {"role": "ai", "content": response["response"], "name": "hotel_advisor"}
    # handoff to another agent or halt
    return Command(goto=response["goto"], update={"messages": ai_msg})


builder = StateGraph(MessagesState)
builder.add_node("travel_advisor", travel_advisor)
builder.add_node("sightseeing_advisor", sightseeing_advisor)
builder.add_node("hotel_advisor", hotel_advisor)
# we'll always start with a general travel advisor
builder.add_edge(START, "travel_advisor")

graph = builder.compile()

builder.add_node() and builder.add_edge() are the two building blocks of the graph: the first adds a node, the second adds an edge between nodes.

(Diagram of the compiled multi-agent graph.)

Human-in-the-Loop in LangGraph

https://www.aidoczh.com/langgraph/how-tos/#_4

This is a very important product capability! When people step in to correct the AI, what they correct, and how the AI continues afterward all matter a great deal on the product side.

For example, when an AI customer-service agent is interrupted mid-call, a human naturally picks the conversation back up; if the AI repeats itself or forgets context, users get annoyed. LangGraph does not seem to solve this cleanly either: when it hits an interrupt it re-executes the current node from the beginning, so how you partition work into nodes becomes very important.

Adding breakpoints

Breakpoints can be added in several ways, but the primary supported approach is to add an "interrupt" before a node executes. This pauses execution at that node; you can then resume from that point to continue.