# LangChain Integration

- The agent-memory-client provides seamless integration with LangChain, eliminating the need for manual tool wrapping. This integration automatically converts memory client tools into LangChain-compatible `StructuredTool` instances.
+ The Python SDK (agent-memory-client) provides a LangChain integration for using the memory server from LangChain applications. It automatically converts memory operations into LangChain-compatible tools.

- ## Why Use This Integration?
+ ## Memory Tools for LangChain

- ### Before (Manual Wrapping) ❌
+ The SDK provides a `get_memory_tools()` function that returns a list of LangChain `StructuredTool` instances. These tools give your LangChain LLMs and agents access to the memory server's capabilities.

- Users had to manually wrap every memory tool with LangChain's `@tool` decorator:
+ For details on the available memory operations, see the [Tool Methods](python-sdk.md#tool-methods) section of the Python SDK documentation.
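
Each converted tool carries the name and description that LangChain exposes to the model. As a quick sanity check (assuming `tools` is a list returned by `get_memory_tools()`, as shown in the examples below), you can print them:

```python
# Inspect the converted tools; each entry is a langchain_core StructuredTool.
for tool in tools:
    print(f"{tool.name}: {tool.description}")
```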

- ```python
- from langchain_core.tools import tool
-
- @tool
- async def create_long_term_memory(memories: List[dict]) -> str:
-     """Store important information in long-term memory."""
-     result = await memory_client.resolve_function_call(
-         function_name="create_long_term_memory",
-         args={"memories": memories},
-         session_id=session_id,
-         user_id=student_id
-     )
-     return f"✅ Stored {len(memories)} memory(ies): {result}"
-
- @tool
- async def search_long_term_memory(text: str, limit: int = 5) -> str:
-     """Search for relevant memories using semantic search."""
-     result = await memory_client.resolve_function_call(
-         function_name="search_long_term_memory",
-         args={"text": text, "limit": limit},
-         session_id=session_id,
-         user_id=student_id
-     )
-     return str(result)
-
- # ... repeat for every tool you want to use
- ```
-
- **Problems:**
- - Tedious boilerplate code
- - Error-prone (easy to forget session_id, user_id, etc.)
- - Hard to maintain
- - Duplicates logic across projects
-
- ### After (Automatic Integration) ✅
+ ### Direct LLM Integration

- With the LangChain integration, you get all tools with one function call:
+ You can bind memory tools directly to a LangChain LLM:

```python
+ from agent_memory_client import create_memory_client
from agent_memory_client.integrations.langchain import get_memory_tools
+ from langchain_openai import ChatOpenAI
+ from langchain_core.tools import StructuredTool
+
+ # Initialize the memory client
+ memory_client = await create_memory_client("http://localhost:8000")

- tools = get_memory_tools(
+ # Get memory tools as LangChain StructuredTool instances
+ tools: list[StructuredTool] = get_memory_tools(
    memory_client=memory_client,
-     session_id=session_id,
-     user_id=user_id
+     session_id="user_session_123",
+     user_id="alice"
)

- # That's it! All tools are ready to use with LangChain agents
+ # Bind tools to an LLM
+ llm = ChatOpenAI(model="gpt-4o")
+ llm_with_tools = llm.bind_tools(tools)
+
+ # Use the LLM with memory capabilities
+ response = await llm_with_tools.ainvoke(
+     "Remember that I prefer morning meetings and I work remotely"
+ )
+ print(response)
```

- **Benefits:**
- - ✅ No manual wrapping needed
- - ✅ Automatic type conversion and validation
- - ✅ Session and user context automatically injected
- - ✅ Works seamlessly with LangChain agents
- - ✅ Consistent behavior across all tools
+ The LLM can now automatically use memory tools to store and retrieve information during conversations.
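
Note that when you bind tools directly (without an agent framework), the model only *requests* tool calls; your code is responsible for running them. A minimal sketch of that loop, assuming the `tools` list and `llm_with_tools` from the example above:

```python
# Map tool names to the StructuredTool instances returned by get_memory_tools()
tools_by_name = {tool.name: tool for tool in tools}

response = await llm_with_tools.ainvoke(
    "Remember that I prefer morning meetings and I work remotely"
)

# Execute each tool call the model requested and print the results
for tool_call in response.tool_calls:
    tool = tools_by_name[tool_call["name"]]
    result = await tool.ainvoke(tool_call["args"])
    print(f"{tool_call['name']} -> {result}")
```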

## Installation

- The LangChain integration requires `langchain-core`:
+ Install the Python SDK with LangChain support:

```bash
pip install agent-memory-client langchain-core
```

- For the full LangChain experience with agents:
+ For LangChain agents and LangGraph:

```bash
- pip install agent-memory-client langchain langchain-openai
+ pip install agent-memory-client langchain langchain-openai langgraph
```

- ## Quick Start
+ ## Using with LangChain

Here's a complete example of creating a memory-enabled LangChain agent:

```python
- import asyncio
from agent_memory_client import create_memory_client
from agent_memory_client.integrations.langchain import get_memory_tools
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
+ from langchain_core.tools import StructuredTool
from langchain_openai import ChatOpenAI

- async def main():
-     # 1. Initialize memory client
-     memory_client = await create_memory_client("http://localhost:8000")
-
-     # 2. Get LangChain-compatible tools (automatic conversion!)
-     tools = get_memory_tools(
-         memory_client=memory_client,
-         session_id="my_session",
-         user_id="alice"
-     )
-
-     # 3. Create LangChain agent
-     llm = ChatOpenAI(model="gpt-4o")
-     prompt = ChatPromptTemplate.from_messages([
-         ("system", "You are a helpful assistant with persistent memory."),
-         ("human", "{input}"),
-         MessagesPlaceholder("agent_scratchpad"),
-     ])
+ # Initialize memory client
+ memory_client = await create_memory_client("http://localhost:8000")

-     agent = create_tool_calling_agent(llm, tools, prompt)
-     executor = AgentExecutor(agent=agent, tools=tools)
-
-     # 4. Use the agent
-     result = await executor.ainvoke({
-         "input": "Remember that I love pizza and work at TechCorp"
-     })
-     print(result["output"])
-
-     # Later conversation - agent can recall the information
-     result = await executor.ainvoke({
-         "input": "What do you know about my food preferences?"
-     })
-     print(result["output"])
-
-     await memory_client.close()
+ # Get memory tools
+ tools: list[StructuredTool] = get_memory_tools(
+     memory_client=memory_client,
+     session_id="my_session",
+     user_id="alice"
+ )

- asyncio.run(main())
+ # Create LangChain agent
+ llm = ChatOpenAI(model="gpt-4o")
+ prompt = ChatPromptTemplate.from_messages([
+     ("system", "You are a helpful assistant with persistent memory."),
+     ("human", "{input}"),
+     MessagesPlaceholder("agent_scratchpad"),
+ ])
+
+ agent = create_tool_calling_agent(llm, tools, prompt)
+ executor = AgentExecutor(agent=agent, tools=tools)
+
+ # Use the agent
+ result = await executor.ainvoke({
+     "input": "Remember that I love pizza and work at TechCorp"
+ })
+ print(result["output"])
+
+ # Later conversation - agent can recall the information
+ result = await executor.ainvoke({
+     "input": "What do you know about my food preferences?"
+ })
+ print(result["output"])
```
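
If your application manages the memory client's lifecycle itself, close it when you are done (as the previous version of this example did) to release the underlying HTTP resources:

```python
# Close the memory client once the agent is no longer needed
await memory_client.close()
```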

- ## API Reference
+ ## Using with LangGraph

- ### `get_memory_tools()`
-
- Convert memory client tools to LangChain-compatible tools.
-
- ```python
- def get_memory_tools(
-     memory_client: MemoryAPIClient,
-     session_id: str,
-     user_id: str | None = None,
-     namespace: str | None = None,
-     tools: Sequence[str] | Literal["all"] = "all",
- ) -> list[StructuredTool]:
- ```
-
- **Parameters:**
-
- - `memory_client` (MemoryAPIClient): Initialized memory client instance
- - `session_id` (str): Session ID for working memory operations
- - `user_id` (str | None): Optional user ID for memory operations
- - `namespace` (str | None): Optional namespace for memory operations
- - `tools` (Sequence[str] | "all"): Which tools to include (default: "all")
-
- **Returns:**
-
- List of LangChain `StructuredTool` instances ready to use with agents.
-
- **Available Tools:**
-
- - `search_memory` - Search long-term memory using semantic search
- - `get_or_create_working_memory` - Get current working memory state
- - `add_memory_to_working_memory` - Store new structured memories
- - `update_working_memory_data` - Update session data
- - `get_long_term_memory` - Retrieve specific memory by ID
- - `create_long_term_memory` - Create long-term memories directly
- - `edit_long_term_memory` - Update existing memories
- - `delete_long_term_memories` - Delete memories permanently
- - `get_current_datetime` - Get current UTC datetime
-
- ## Usage Examples
-
- ### Example 1: All Memory Tools
-
- Get all available memory tools:
+ You can use memory tools in LangGraph workflows:

```python
+ from agent_memory_client import create_memory_client
from agent_memory_client.integrations.langchain import get_memory_tools
+ from langchain_core.tools import StructuredTool
+ from langchain_openai import ChatOpenAI
+ from langgraph.prebuilt import create_react_agent

- tools = get_memory_tools(
-     memory_client=client,
-     session_id="chat_session",
+ # Initialize memory client
+ memory_client = await create_memory_client("http://localhost:8000")
+
+ # Get memory tools
+ tools: list[StructuredTool] = get_memory_tools(
+     memory_client=memory_client,
+     session_id="langgraph_session",
    user_id="alice"
)

- # Returns all 9 memory tools
- print(f"Created {len(tools)} tools")
+ # Create a LangGraph agent with memory tools
+ llm = ChatOpenAI(model="gpt-4o")
+ graph = create_react_agent(llm, tools)
+
+ # Use the agent
+ result = await graph.ainvoke({
+     "messages": [("user", "Remember that I'm learning Python and prefer visual examples")]
+ })
+ print(result["messages"][-1].content)
+
+ # Continue the conversation
+ result = await graph.ainvoke({
+     "messages": [("user", "What programming language am I learning?")]
+ })
+ print(result["messages"][-1].content)
```

- ### Example 2: Selective Tools
+ ## Advanced Usage
+
+ ### Selective Tools

Get only specific tools you need:

```python
- tools = get_memory_tools(
+ from agent_memory_client.integrations.langchain import get_memory_tools
+ from langchain_core.tools import StructuredTool
+
+ tools: list[StructuredTool] = get_memory_tools(
    memory_client=client,
    session_id="chat_session",
    user_id="alice",
    tools=["search_memory", "create_long_term_memory"]
)
-
- # Returns only the 2 specified tools
```
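
Tool names are validated: passing a name the integration does not recognize raises a `ValueError` (this snippet reuses `client` from the example above):

```python
# Requesting an unknown tool name fails fast with a ValueError
try:
    get_memory_tools(
        memory_client=client,
        session_id="chat_session",
        user_id="alice",
        tools=["not_a_real_tool"],
    )
except ValueError as e:
    print(f"Invalid tool selection: {e}")
```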

- ### Example 3: Combining with Custom Tools
+ ### Combining with Custom Tools

Combine memory tools with your own custom tools:

@@ -245,15 +198,18 @@ agent = create_tool_calling_agent(llm, all_tools, prompt)
executor = AgentExecutor(agent=agent, tools=all_tools)
```

- ### Example 4: Multi-User Application
+ ### Multi-User Application

Handle multiple users with different sessions:

```python
+ from agent_memory_client.integrations.langchain import get_memory_tools
+ from langchain_core.tools import StructuredTool
+
async def create_user_agent(user_id: str, session_id: str):
    """Create a memory-enabled agent for a specific user."""

-     tools = get_memory_tools(
+     tools: list[StructuredTool] = get_memory_tools(
        memory_client=shared_memory_client,
        session_id=session_id,
        user_id=user_id,
@@ -279,69 +235,8 @@ await alice_agent.ainvoke({"input": "I love pizza"})
await bob_agent.ainvoke({"input": "I love sushi"})
```

- ## Advanced Usage
-
- ### Custom Tool Selection
-
- Choose exactly which memory capabilities your agent needs:
-
- ```python
- # Minimal agent - only search and create
- minimal_tools = get_memory_tools(
-     memory_client=client,
-     session_id="minimal",
-     user_id="user",
-     tools=["search_memory", "create_long_term_memory"]
- )
-
- # Read-only agent - only search
- readonly_tools = get_memory_tools(
-     memory_client=client,
-     session_id="readonly",
-     user_id="user",
-     tools=["search_memory", "get_long_term_memory"]
- )
-
- # Full control agent - all tools
- full_tools = get_memory_tools(
-     memory_client=client,
-     session_id="full",
-     user_id="user",
-     tools="all"
- )
- ```
-
- ### Error Handling
-
- The integration handles errors gracefully:
-
- ```python
- try:
-     tools = get_memory_tools(
-         memory_client=client,
-         session_id="session",
-         user_id="user",
-         tools=["invalid_tool_name"]  # This will raise ValueError
-     )
- except ValueError as e:
-     print(f"Invalid tool selection: {e}")
- ```
-
- ## Comparison with Direct SDK Usage
-
- | Feature | Direct SDK | LangChain Integration |
- |---------|-----------|----------------------|
- | Setup complexity | Low | Very Low |
- | Tool wrapping | Manual | Automatic |
- | Type safety | Manual | Automatic |
- | Context injection | Manual | Automatic |
- | Agent compatibility | Requires wrapping | Native |
- | Code maintenance | High | Low |
- | Best for | Custom workflows | LangChain agents |
-
## See Also

+ - [Python SDK Documentation](python-sdk.md) - Complete SDK reference and tool methods
- [Memory Integration Patterns](memory-integration-patterns.md) - Overview of different integration approaches
- - [Python SDK](python-sdk.md) - Direct SDK usage without LangChain
- - [Agent Examples](agent-examples.md) - More agent implementation examples
- [LangChain Integration Example](https://github.com/redis/agent-memory-server/blob/main/examples/langchain_integration_example.py) - Complete working example