README.md: 30 additions & 5 deletions
@@ -40,7 +40,7 @@ PydanticAI is a Python Agent Framework designed to make it less painful to build
* Novel, type-safe [dependency injection system](https://ai.pydantic.dev/dependencies/), useful for testing and eval-driven iterative development
* [Logfire integration](https://ai.pydantic.dev/logfire/) for debugging and monitoring the performance and general behavior of your LLM-powered application

-## example "In Beta"
+## In Beta!

PydanticAI is in early beta; the API is still subject to change and there's a lot more to do.
[Feedback](https://github.com/pydantic/pydantic-ai/issues) is very welcome!
@@ -52,11 +52,16 @@ Here's a minimal example of PydanticAI:
```py
from pydantic_ai import Agent

-agent = Agent(  # (1)!
+# Define a very simple agent, including the model to use; you can also set the model when running the agent.
+agent = Agent(
    'gemini-1.5-flash',
+    # Register a static system prompt using a keyword argument to the agent.
+    # For more complex, dynamically-generated system prompts, see the example below.
    system_prompt='Be concise, reply with one sentence.',
)

+# Run the agent synchronously, conducting a conversation with the LLM.
+# Here the exchange should be very short: PydanticAI will send the system prompt and the user query to the LLM, and the model will return a text response. See below for a more complex run.
result = agent.run_sync('Where does "hello world" come from?')
print(result.data)
"""
@@ -83,21 +88,29 @@ from pydantic_ai import Agent, RunContext
from bank_database import DatabaseConn


+# SupportDependencies is used to pass data, connections, and logic into the model that will be needed when running
+# system prompt and tool functions. Dependency injection provides a type-safe way to customise the behavior of your agents.
@dataclass
class SupportDependencies:
    customer_id: int
    db: DatabaseConn


+# This Pydantic model defines the structure of the result returned by the agent.
class SupportResult(BaseModel):
    support_advice: str = Field(description='Advice returned to the customer')
    block_card: bool = Field(description="Whether to block the customer's card")
    risk: int = Field(description='Risk level of query', ge=0, le=10)


+# This agent will act as first-tier support in a bank.
+# Agents are generic in the type of dependencies they accept and the type of result they return.
+# In this case, the support agent has type `Agent[SupportDependencies, SupportResult]`.
support_agent = Agent(
    'openai:gpt-4o',
    deps_type=SupportDependencies,
+    # The response from the agent will be guaranteed to be a SupportResult;
+    # if validation fails, the agent is prompted to try again.
    result_type=SupportResult,
    system_prompt=(
        'You are a support agent in our bank, give the '
@@ -106,30 +119,42 @@ support_agent = Agent(
)


+# Dynamic system prompts can make use of dependency injection.
+# Dependencies are carried via the `RunContext` argument, which is parameterized with the `deps_type` from above.
+# If the type annotation here is wrong, static type checkers will catch it.

    """Returns the customer's current account balance."""  # (7)!
-    balance = await ctx.deps.db.customer_balance(
+    return await ctx.deps.db.customer_balance(
        id=ctx.deps.customer_id,
        include_pending=include_pending,
    )
-    return f'${balance:.2f}'


...  # (11)!
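The decorated functions that the added comments describe fall outside the lines shown in this hunk. Below is a rough sketch of how a dynamic system prompt registered with the `@agent.system_prompt` decorator (note 5 below) might use `RunContext` together with the `SupportDependencies` defined above; the function name and the `customer_name` database method are illustrative assumptions, not the collapsed code:

```py
from pydantic_ai import RunContext

# support_agent and SupportDependencies come from the example above.
# Hypothetical dynamic system prompt; the actual code in the docs may differ.
@support_agent.system_prompt
async def add_customer_name(ctx: RunContext[SupportDependencies]) -> str:
    # ctx.deps is the SupportDependencies instance supplied to the run;
    # customer_name is an assumed method on DatabaseConn.
    customer_name = await ctx.deps.db.customer_name(id=ctx.deps.customer_id)
    return f"The customer's name is {customer_name!r}"
```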
@@ -127,8 +126,8 @@ async def main():
3. The `SupportDependencies` dataclass is used to pass data, connections, and logic into the model that will be needed when running [system prompt](agents.md#system-prompts) and [tool](agents.md#function-tools) functions. PydanticAI's system of dependency injection provides a [type-safe](agents.md#static-type-checking) way to customise the behavior of your agents, and can be especially useful when running [unit tests](testing-evals.md) and evals.
4. Static [system prompts](agents.md#system-prompts) can be registered with the [`system_prompt` keyword argument][pydantic_ai.Agent.__init__] to the agent.
5. Dynamic [system prompts](agents.md#system-prompts) can be registered with the [`@agent.system_prompt`][pydantic_ai.Agent.system_prompt] decorator, and can make use of dependency injection. Dependencies are carried via the [`RunContext`][pydantic_ai.dependencies.RunContext] argument, which is parameterized with the `deps_type` from above. If the type annotation here is wrong, static type checkers will catch it.
-6. [`tool`](agents.md#function-tools) lets you register functions which the LLM may call while responding to a user. Again, dependencies are carried via [`RunContext`][pydantic_ai.dependencies.RunContext], and any other arguments become the tool schema passed to the LLM. Pydantic is used to validate these arguments, and errors are passed back to the LLM so it can retry.
-7. The docstring of a tool is also passed to the LLM as the description of the tool. Parameter descriptions are [extracted](agents.md#function-tools-and-schema) from the docstring and added to the tool schema sent to the LLM.
+6. [`tool`](agents.md#function-tools) lets you register functions which the LLM may call while responding to a user. Again, dependencies are carried via [`RunContext`][pydantic_ai.dependencies.RunContext]; any other arguments become the tool schema passed to the LLM. Pydantic is used to validate these arguments, and errors are passed back to the LLM so it can retry.
+7. The docstring of a tool is also passed to the LLM as the description of the tool. Parameter descriptions are [extracted](agents.md#function-tools-and-schema) from the docstring and added to the parameter schema sent to the LLM.
8. [Run the agent](agents.md#running-agents) asynchronously, conducting a conversation with the LLM until a final response is reached. Even in this fairly simple case, the agent will exchange multiple messages with the LLM as tools are called to retrieve a result.
9. The response from the agent will be guaranteed to be a `SupportResult`; if validation fails, [reflection](agents.md#reflection-and-self-correction) means the agent is prompted to try again.
10. The result will be validated with Pydantic to guarantee it is a `SupportResult`; since the agent is generic, it will also be typed as a `SupportResult` to aid with static type checking.
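Notes 3, 8, 9 and 10 describe how dependencies are passed into a run and how the typed result comes back. Here is a minimal sketch of a run against the support agent defined above, assuming the async `run()` method accepts a `deps=` argument and that `DatabaseConn` can be constructed without arguments; both are assumptions for illustration:

```py
import asyncio

from bank_database import DatabaseConn

# support_agent and SupportDependencies come from the example above.


async def main():
    # Assumed: DatabaseConn() takes no arguments here; a real connection likely would.
    deps = SupportDependencies(customer_id=123, db=DatabaseConn())
    # Assumed: deps= is how dependencies reach system prompt and tool functions at run time.
    result = await support_agent.run('What is my balance? Include pending amounts.', deps=deps)
    # result.data is validated as a SupportResult, so these attributes are type-checked.
    print(result.data.support_advice, result.data.block_card, result.data.risk)


asyncio.run(main())
```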