436 changes: 5 additions & 431 deletions docs/agents.md

Large diffs are not rendered by default.

8 changes: 4 additions & 4 deletions docs/dependencies.md
@@ -1,6 +1,6 @@
# Dependencies

- PydanticAI uses a dependency injection system to provide data and services to your agent's [system prompts](agents.md#system-prompts), [tools](agents.md#function-tools) and [result validators](results.md#result-validators-functions).
+ PydanticAI uses a dependency injection system to provide data and services to your agent's [system prompts](agents.md#system-prompts), [tools](tools.md) and [result validators](results.md#result-validators-functions).

Matching PydanticAI's design philosophy, our dependency system tries to use existing best practice in Python development rather than inventing esoteric "magic"; this should make dependencies type-safe, understandable, easier to test, and ultimately easier to deploy in production.
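For orientation, here is a minimal sketch of the pattern these pages describe: a dependencies dataclass declared on the agent via `deps_type` and consumed through `RunContext`. The `MyDeps` fields, model name and prompt below are illustrative assumptions, not content from this PR.

```python
from dataclasses import dataclass

from pydantic_ai import Agent, RunContext


@dataclass
class MyDeps:
    # illustrative dependencies container; real apps might hold DB connections,
    # HTTP clients, API keys, etc.
    customer_name: str


agent = Agent(
    'openai:gpt-4o',  # example model; any supported model name works
    deps_type=MyDeps,  # the agent is generic in its dependency type
)


@agent.system_prompt
async def personalise(ctx: RunContext[MyDeps]) -> str:
    # dependencies are available on the run context in system prompts,
    # tools and result validators alike
    return f"The customer's name is {ctx.deps.customer_name!r}."


result = agent.run_sync('Greet the customer.', deps=MyDeps(customer_name='Anne'))
print(result.data)
```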

@@ -101,7 +101,7 @@ _(This example is complete, it can be run "as is")_

### Asynchronous vs. Synchronous dependencies

- [System prompt functions](agents.md#system-prompts), [function tools](agents.md#function-tools) and [result validators](results.md#result-validators-functions) are all run in the async context of an agent run.
+ [System prompt functions](agents.md#system-prompts), [function tools](tools.md) and [result validators](results.md#result-validators-functions) are all run in the async context of an agent run.

If these functions are not coroutines (i.e. not `async def`) they are called with
[`run_in_executor`][asyncio.loop.run_in_executor] in a thread pool; it's therefore marginally preferable
@@ -158,7 +158,7 @@ _(This example is complete, it can be run "as is")_

## Full Example

- As well as system prompts, dependencies can be used in [tools](agents.md#function-tools) and [result validators](results.md#result-validators-functions).
+ As well as system prompts, dependencies can be used in [tools](tools.md) and [result validators](results.md#result-validators-functions).

```python {title="full_example.py" hl_lines="27-35 38-48"}
from dataclasses import dataclass
@@ -275,7 +275,7 @@ async def application_code(prompt: str) -> str: # (3)!
3. Application code that calls the agent; in a real application this might be an API endpoint.
4. Call the agent from within the application code; in a real application this call might be deep within a call stack. Note `app_deps` here will NOT be used when deps are overridden.

- ```python {title="test_joke_app.py" hl_lines="10-12"}
+ ```python {title="test_joke_app.py" hl_lines="10-12" call_name="test_application_code"}
from joke_app import MyDeps, application_code, joke_agent


2 changes: 1 addition & 1 deletion docs/examples/bank-support.md
@@ -4,7 +4,7 @@ Demonstrates:

* [dynamic system prompt](../agents.md#system-prompts)
* [structured `result_type`](../results.md#structured-result-validation)
- * [tools](../agents.md#function-tools)
+ * [tools](../tools.md)

## Running the Example

2 changes: 1 addition & 1 deletion docs/examples/rag.md
@@ -4,7 +4,7 @@ RAG search example. This demo allows you to ask questions of the [logfire](https:

Demonstrates:

- * [tools](../agents.md#function-tools)
+ * [tools](../tools.md)
* [agent dependencies](../dependencies.md)
* RAG search

2 changes: 1 addition & 1 deletion docs/examples/weather-agent.md
@@ -2,7 +2,7 @@ Example of PydanticAI with multiple tools which the LLM needs to call in turn to

Demonstrates:

- * [tools](../agents.md#function-tools)
+ * [tools](../tools.md)
* [agent dependencies](../dependencies.md)
* [streaming text responses](../results.md#streaming-text)

6 changes: 3 additions & 3 deletions docs/index.md
@@ -123,11 +123,11 @@ async def main():

1. This [agent](agents.md) will act as first-tier support in a bank. Agents are generic in the type of dependencies they accept and the type of result they return. In this case, the support agent has type `#!python Agent[SupportDependencies, SupportResult]`.
2. Here we configure the agent to use [OpenAI's GPT-4o model](api/models/openai.md); you can also set the model when running the agent.
- 3. The `SupportDependencies` dataclass is used to pass data, connections, and logic into the model that will be needed when running [system prompt](agents.md#system-prompts) and [tool](agents.md#function-tools) functions. PydanticAI's system of dependency injection provides a [type-safe](agents.md#static-type-checking) way to customise the behavior of your agents, and can be especially useful when running [unit tests](testing-evals.md) and evals.
+ 3. The `SupportDependencies` dataclass is used to pass data, connections, and logic into the model that will be needed when running [system prompt](agents.md#system-prompts) and [tool](tools.md) functions. PydanticAI's system of dependency injection provides a [type-safe](agents.md#static-type-checking) way to customise the behavior of your agents, and can be especially useful when running [unit tests](testing-evals.md) and evals.
4. Static [system prompts](agents.md#system-prompts) can be registered with the [`system_prompt` keyword argument][pydantic_ai.Agent.__init__] to the agent.
5. Dynamic [system prompts](agents.md#system-prompts) can be registered with the [`@agent.system_prompt`][pydantic_ai.Agent.system_prompt] decorator, and can make use of dependency injection. Dependencies are carried via the [`RunContext`][pydantic_ai.tools.RunContext] argument, which is parameterized with the `deps_type` from above. If the type annotation here is wrong, static type checkers will catch it.
- 6. [`tool`](agents.md#function-tools) let you register functions which the LLM may call while responding to a user. Again, dependencies are carried via [`RunContext`][pydantic_ai.tools.RunContext], any other arguments become the tool schema passed to the LLM. Pydantic is used to validate these arguments, and errors are passed back to the LLM so it can retry.
- 7. The docstring of a tool is also passed to the LLM as the description of the tool. Parameter descriptions are [extracted](agents.md#function-tools-and-schema) from the docstring and added to the parameter schema sent to the LLM.
+ 6. [`tool`](tools.md) let you register functions which the LLM may call while responding to a user. Again, dependencies are carried via [`RunContext`][pydantic_ai.tools.RunContext], any other arguments become the tool schema passed to the LLM. Pydantic is used to validate these arguments, and errors are passed back to the LLM so it can retry.
+ 7. The docstring of a tool is also passed to the LLM as the description of the tool. Parameter descriptions are [extracted](tools.md#function-tools-and-schema) from the docstring and added to the parameter schema sent to the LLM.
8. [Run the agent](agents.md#running-agents) asynchronously, conducting a conversation with the LLM until a final response is reached. Even in this fairly simple case, the agent will exchange multiple messages with the LLM as tools are called to retrieve a result.
9. The response from the agent will be guaranteed to be a `SupportResult`; if validation fails, [reflection](agents.md#reflection-and-self-correction) will mean the agent is prompted to try again.
10. The result will be validated with Pydantic to guarantee it is a `SupportResult`; since the agent is generic, it'll also be typed as a `SupportResult` to aid with static type checking.
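Read together, the notes above imply roughly the following agent skeleton. This is a condensed sketch reconstructed from the annotations only (the tool body and field values are placeholders), not the full example from `docs/index.md`:

```python
from dataclasses import dataclass

from pydantic import BaseModel, Field

from pydantic_ai import Agent, RunContext


@dataclass
class SupportDependencies:
    # note 3: data, connections and logic needed by system prompts and tools
    customer_id: int


class SupportResult(BaseModel):
    support_advice: str = Field(description='Advice returned to the customer')
    risk: int = Field(description='Risk level of the query', ge=0, le=10)


support_agent = Agent(  # notes 1 & 2: generic in deps and result type
    'openai:gpt-4o',
    deps_type=SupportDependencies,
    result_type=SupportResult,
    system_prompt='You are a support agent in our bank.',  # note 4
)


@support_agent.system_prompt  # note 5: dynamic prompt, deps via RunContext
async def add_customer_id(ctx: RunContext[SupportDependencies]) -> str:
    return f'The customer ID is {ctx.deps.customer_id!r}.'


@support_agent.tool  # notes 6 & 7: docstring and parameters become the tool schema
async def customer_balance(ctx: RunContext[SupportDependencies]) -> float:
    """Return the customer's current account balance."""
    return 123.45  # placeholder; a real tool would query a database
```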
6 changes: 3 additions & 3 deletions docs/results.md
@@ -166,7 +166,7 @@ There are two main challenges with streamed results:

Example of streamed text result:

- ```python {title="streamed_hello_world.py"}
+ ```python {title="streamed_hello_world.py" line_length="120"}
from pydantic_ai import Agent

agent = Agent('gemini-1.5-flash') # (1)!
@@ -224,7 +224,7 @@ Not all types are supported with partial validation in Pydantic, see [pydantic/p

Here's an example of streaming a user profile as it's built:

- ```python {title="streamed_user_profile.py"}
+ ```python {title="streamed_user_profile.py" line_length="120"}
from datetime import date

from typing_extensions import TypedDict
@@ -263,7 +263,7 @@ _(This example is complete, it can be run "as is")_

If you want fine-grained control of validation, particularly catching validation errors, you can use the following pattern:

- ```python {title="streamed_user_profile.py"}
+ ```python {title="streamed_user_profile.py" line_length="120"}
from datetime import date

from pydantic import ValidationError
6 changes: 3 additions & 3 deletions docs/testing-evals.md
@@ -27,7 +27,7 @@ Unless you're really sure you know better, you'll probably want to follow roughl
The simplest and fastest way to exercise most of your application code is using [`TestModel`][pydantic_ai.models.test.TestModel]; this will (by default) call all tools in the agent, then return either plain text or a structured response depending on the return type of the agent.

!!! note "`TestModel` is not magic"
- The "clever" (but not too clever) part of `TestModel` is that it will attempt to generate valid structured data for [function tools](agents.md#function-tools) and [result types](results.md#structured-result-validation) based on the schema of the registered tools.
+ The "clever" (but not too clever) part of `TestModel` is that it will attempt to generate valid structured data for [function tools](tools.md) and [result types](results.md#structured-result-validation) based on the schema of the registered tools.

There's no ML or AI in `TestModel`; it's just plain old procedural Python code that tries to generate data that satisfies the JSON schema of a tool.
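As a rough illustration of how this is typically wired up (the agent and prompt here are hypothetical; the real weather-app tests appear below), `TestModel` can be swapped in with `Agent.override` so the test never contacts an LLM:

```python
from pydantic_ai import Agent
from pydantic_ai.models.test import TestModel

agent = Agent('openai:gpt-4o')  # hypothetical agent; any registered tools would be called too


def test_with_test_model():
    # replace the real model for the duration of the test
    with agent.override(model=TestModel()):
        result = agent.run_sync('Testing, testing')
    assert result.data is not None
```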

@@ -89,7 +89,7 @@ Here we have a function that takes a list of `#!python (user_prompt, user_id)` t

Here's how we would write tests using [`TestModel`][pydantic_ai.models.test.TestModel]:

- ```python {title="test_weather_app.py"}
+ ```python {title="test_weather_app.py" call_name="test_forecast"}
from datetime import timezone
import pytest

@@ -182,7 +182,7 @@ To fully exercise `weather_forecast`, we need to use [`FunctionModel`][pydantic_

Here's an example of using `FunctionModel` to test the `weather_forecast` tool with custom inputs:

- ```python {title="test_weather_app2.py"}
+ ```python {title="test_weather_app2.py" call_name="test_forecast_future"}
import re

import pytest