Merged
74 changes: 59 additions & 15 deletions docs/agents.md
@@ -49,7 +49,7 @@ print(result.data)
```

1. Create an agent, which expects an integer dependency and returns a boolean result. This agent will have type `#!python Agent[int, bool]`.
-2. Define a tool that checks if the square is a winner. Here [`RunContext`][pydantic_ai.dependencies.RunContext] is parameterized with the dependency type `int`; if you got the dependency type wrong you'd get a typing error.
+2. Define a tool that checks if the square is a winner. Here [`RunContext`][pydantic_ai.tools.RunContext] is parameterized with the dependency type `int`; if you got the dependency type wrong you'd get a typing error.
3. In reality, you might want to use a random number here e.g. `random.randint(0, 36)`.
4. `result.data` will be a boolean indicating if the square is a winner. Pydantic performs the result validation, it'll be typed as a `bool` since its type is derived from the `result_type` generic parameter of the agent.

@@ -161,7 +161,7 @@ def foobar(x: bytes) -> None:
pass


-result = agent.run_sync('Does their name start with "A"?', deps=User('Adam'))
+result = agent.run_sync('Does their name start with "A"?', deps=User('Anne'))
foobar(result.data) # (3)!
```

@@ -222,7 +222,7 @@ print(result.data)

1. The agent expects a string dependency.
2. Static system prompt defined at agent creation time.
-3. Dynamic system prompt defined via a decorator with [`RunContext`][pydantic_ai.dependencies.RunContext], this is called just after `run_sync`, not when the agent is created, so can benefit from runtime information like the dependencies used on that run.
+3. Dynamic system prompt defined via a decorator with [`RunContext`][pydantic_ai.tools.RunContext], this is called just after `run_sync`, not when the agent is created, so can benefit from runtime information like the dependencies used on that run.
4. Another dynamic system prompt, system prompts don't have to have the `RunContext` parameter.

_(This example is complete, it can be run "as is")_
@@ -238,12 +238,13 @@ They're useful when it is impractical or impossible to put all the context an ag

The main semantic difference between PydanticAI Tools and RAG is RAG is synonymous with vector search, while PydanticAI tools are more general-purpose. (Note: we may add support for vector search functionality in the future, particularly an API for generating embeddings. See [#58](https://github.com/pydantic/pydantic-ai/issues/58))

-There are two different decorator functions to register tools:
+There are a number of ways to register tools with an agent:

-1. [`@agent.tool`][pydantic_ai.Agent.tool] — for tools that need access to the agent [context][pydantic_ai.dependencies.RunContext]
-2. [`@agent.tool_plain`][pydantic_ai.Agent.tool_plain] — for tools that do not need access to the agent [context][pydantic_ai.dependencies.RunContext]
+* via the [`@agent.tool`][pydantic_ai.Agent.tool] decorator — for tools that need access to the agent [context][pydantic_ai.tools.RunContext]
+* via the [`@agent.tool_plain`][pydantic_ai.Agent.tool_plain] decorator — for tools that do not need access to the agent [context][pydantic_ai.tools.RunContext]
+* via the [`tools`][pydantic_ai.Agent.__init__] keyword argument to `Agent` which can take either plain functions, or instances of [`Tool`][pydantic_ai.tools.Tool]

-`@agent.tool` is the default since in the majority of cases tools will need access to the agent context.
+`@agent.tool` is considered the default decorator since in the majority of cases tools will need access to the agent context.

Here's an example using both:

@@ -275,9 +276,9 @@ def get_player_name(ctx: RunContext[str]) -> str:
     return ctx.deps


-dice_result = agent.run_sync('My guess is 4', deps='Adam') # (5)!
+dice_result = agent.run_sync('My guess is 4', deps='Anne') # (5)!
 print(dice_result.data)
-#> Congratulations Adam, you guessed correctly! You're a winner!
+#> Congratulations Anne, you guessed correctly! You're a winner!
```

1. This is a pretty simple task, so we can use the fast and cheap Gemini flash model.
@@ -330,13 +331,13 @@ print(dice_result.all_messages())
),
ToolReturn(
tool_name='get_player_name',
-content='Adam',
+content='Anne',
tool_id=None,
timestamp=datetime.datetime(...),
role='tool-return',
),
ModelTextResponse(
-content="Congratulations Adam, you guessed correctly! You're a winner!",
+content="Congratulations Anne, you guessed correctly! You're a winner!",
timestamp=datetime.datetime(...),
role='model-text-response',
),
@@ -370,16 +371,59 @@ sequenceDiagram
deactivate LLM
activate Agent
Note over Agent: Retrieves player name
-Agent -->> LLM: ToolReturn<br>"Adam"
+Agent -->> LLM: ToolReturn<br>"Anne"
deactivate Agent
activate LLM
Note over LLM: LLM constructs final response

-LLM ->> Agent: ModelTextResponse<br>"Congratulations Adam, ..."
+LLM ->> Agent: ModelTextResponse<br>"Congratulations Anne, ..."
deactivate LLM
Note over Agent: Game session complete
```

### Registering Function Tools via kwarg

As well as using the decorators, we can register tools via the `tools` argument to the [`Agent` constructor][pydantic_ai.Agent.__init__]. This is useful when you want to re-use tools, and can also give more fine-grained control over the tools.

```py title="dice_game_tool_kwarg.py"
import random

from pydantic_ai import Agent, RunContext, Tool


def roll_die() -> str:
    """Roll a six-sided die and return the result."""
    return str(random.randint(1, 6))


def get_player_name(ctx: RunContext[str]) -> str:
    """Get the player's name."""
    return ctx.deps


agent_a = Agent(
    'gemini-1.5-flash',
    deps_type=str,
    tools=[roll_die, get_player_name],  # (1)!
)
agent_b = Agent(
    'gemini-1.5-flash',
    deps_type=str,
    tools=[  # (2)!
        Tool(roll_die, takes_ctx=False),
        Tool(get_player_name, takes_ctx=True),
    ],
)
dice_result = agent_b.run_sync('My guess is 4', deps='Anne')
print(dice_result.data)
#> Congratulations Anne, you guessed correctly! You're a winner!
```

1. The simplest way to register tools via the `Agent` constructor is to pass a list of functions; the function signature is inspected to determine whether the tool takes [`RunContext`][pydantic_ai.tools.RunContext].
2. `agent_a` and `agent_b` are identical — but we can use [`Tool`][pydantic_ai.tools.Tool] to give more fine-grained control over how tools are defined, e.g. setting their name or description.

_(This example is complete, it can be run "as is")_
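The signature inspection mentioned in footnote 1 can be sketched in isolation; here `RunContext` is a deliberately simplified stand-in for `pydantic_ai.tools.RunContext`, not the real class:

```python
import inspect
from typing import Any, Callable, Generic, TypeVar, get_origin

AgentDeps = TypeVar('AgentDeps')


class RunContext(Generic[AgentDeps]):
    """Simplified stand-in for pydantic_ai.tools.RunContext."""

    def __init__(self, deps: AgentDeps):
        self.deps = deps


def _is_call_ctx(annotation: Any) -> bool:
    # Matches both a bare `RunContext` annotation and `RunContext[...]`.
    return annotation is RunContext or get_origin(annotation) is RunContext


def takes_ctx(function: Callable[..., Any]) -> bool:
    """Return True if the function's first parameter is annotated with RunContext."""
    sig = inspect.signature(function)
    try:
        _, first_param = next(iter(sig.parameters.items()))
    except StopIteration:
        return False  # no parameters at all
    return first_param.annotation is not sig.empty and _is_call_ctx(first_param.annotation)


def roll_die() -> str:
    return '4'


def get_player_name(ctx: RunContext[str]) -> str:
    return ctx.deps


print(takes_ctx(roll_die), takes_ctx(get_player_name))
#> False True
```

The same idea lets `tools=[roll_die, get_player_name]` work without an explicit `takes_ctx` flag: the presence of a `RunContext`-annotated first parameter is enough.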

### Function Tools vs. Structured Results

As the name suggests, function tools use the model's "tools" or "functions" API to let the model know what is available to call. Tools or functions are also used to define the schema(s) for structured responses, thus a model might have access to many tools, some of which call function tools while others end the run and return a result.
@@ -445,7 +489,7 @@ agent.run_sync('hello', model=FunctionModel(print_schema))

_(This example is complete, it can be run "as is")_

-The return type of tool can be any valid JSON object ([`JsonData`][pydantic_ai.dependencies.JsonData]) as some models (e.g. Gemini) support semi-structured return values, some expect text (OpenAI) but seem to be just as good at extracting meaning from the data. If a Python object is returned and the model expects a string, the value will be serialized to JSON.
+The return type of tool can be any valid JSON object ([`JsonData`][pydantic_ai.tools.JsonData]) as some models (e.g. Gemini) support semi-structured return values, some expect text (OpenAI) but seem to be just as good at extracting meaning from the data. If a Python object is returned and the model expects a string, the value will be serialized to JSON.
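As a rough sketch of that fallback (illustrative, not pydantic-ai's actual code path), serializing a non-string tool return for a text-only model might look like:

```python
import json


def tool_return_to_text(value: object) -> str:
    """Render a tool's return value for a model that expects plain text.

    Strings pass through unchanged; any other JSON-compatible value is
    serialized to a JSON string.
    """
    if isinstance(value, str):
        return value
    return json.dumps(value)


print(tool_return_to_text('already text'))
#> already text
print(tool_return_to_text({'lat': 51.5, 'lng': -0.12}))
#> {"lat": 51.5, "lng": -0.12}
```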

If a tool has a single parameter that can be represented as an object in JSON schema (e.g. dataclass, TypedDict, pydantic model), the schema for the tool is simplified to be just that object. (TODO example)
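Pending that TODO, here is an illustrative sketch of what the simplification means; the `Divide` model and the divide tool it describes are hypothetical:

```python
from pydantic import BaseModel


class Divide(BaseModel):
    """Arguments for a hypothetical divide tool."""

    numerator: float
    denominator: float


# A tool like `def divide(args: Divide) -> float` has a single
# object-valued parameter, so rather than wrapping it as
# {"args": {...Divide schema...}}, the tool's schema can simply
# be the Divide schema itself:
schema = Divide.model_json_schema()
print(sorted(schema['properties']))
#> ['denominator', 'numerator']
```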

@@ -456,7 +500,7 @@ Validation errors from both function tool parameter validation and [structured r
You can also raise [`ModelRetry`][pydantic_ai.exceptions.ModelRetry] from within a [tool](#function-tools) or [result validator function](results.md#result-validators-functions) to tell the model it should retry generating a response.

- The default retry count is **1** but can be altered for the [entire agent][pydantic_ai.Agent.__init__], a [specific tool][pydantic_ai.Agent.tool], or a [result validator][pydantic_ai.Agent.__init__].
-- You can access the current retry count from within a tool or result validator via [`ctx.retry`][pydantic_ai.dependencies.RunContext].
+- You can access the current retry count from within a tool or result validator via [`ctx.retry`][pydantic_ai.tools.RunContext].
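A library-free sketch of that retry loop (this `ModelRetry` is a stand-in for `pydantic_ai.exceptions.ModelRetry`, and the model's argument attempts are canned rather than coming from an LLM):

```python
class ModelRetry(Exception):
    """Stand-in for pydantic_ai.exceptions.ModelRetry."""


def run_tool_with_retries(tool, argument_attempts, max_retries=1):
    """Call `tool` with successive model-proposed arguments.

    In the real framework each ModelRetry message is sent back to the LLM
    so it can propose new arguments; here the attempts are pre-canned.
    """
    retry = 0
    for args in argument_attempts:
        try:
            return tool(retry, args)
        except ModelRetry:
            retry += 1
            if retry > max_retries:
                raise  # out of retries: surface the error to the caller
    raise RuntimeError('model stopped responding')


def get_user_id(retry: int, name: str) -> int:
    users = {'Anne': 123}
    if name not in users:
        raise ModelRetry(f'No user named {name!r} found (retry {retry})')
    return users[name]


# The "model" first guesses the wrong name, then corrects itself:
print(run_tool_with_retries(get_user_id, ['Adam', 'Anne']))
#> 123
```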

Here's an example:

3 changes: 0 additions & 3 deletions docs/api/dependencies.md

This file was deleted.

3 changes: 3 additions & 0 deletions docs/api/tools.md
@@ -0,0 +1,3 @@
# `pydantic_ai.tools`

::: pydantic_ai.tools
10 changes: 5 additions & 5 deletions docs/dependencies.md
@@ -51,7 +51,7 @@ _(This example is complete, it can be run "as is")_

## Accessing Dependencies

Dependencies are accessed through the [`RunContext`][pydantic_ai.dependencies.RunContext] type, this should be the first parameter of system prompt functions etc.
Dependencies are accessed through the [`RunContext`][pydantic_ai.tools.RunContext] type, this should be the first parameter of system prompt functions etc.


```py title="system_prompt_dependencies.py" hl_lines="20-27"
@@ -92,10 +92,10 @@ async def main():
#> Did you hear about the toothpaste scandal? They called it Colgate.
```

-1. [`RunContext`][pydantic_ai.dependencies.RunContext] may optionally be passed to a [`system_prompt`][pydantic_ai.Agent.system_prompt] function as the only argument.
-2. [`RunContext`][pydantic_ai.dependencies.RunContext] is parameterized with the type of the dependencies, if this type is incorrect, static type checkers will raise an error.
-3. Access dependencies through the [`.deps`][pydantic_ai.dependencies.RunContext.deps] attribute.
-4. Access dependencies through the [`.deps`][pydantic_ai.dependencies.RunContext.deps] attribute.
+1. [`RunContext`][pydantic_ai.tools.RunContext] may optionally be passed to a [`system_prompt`][pydantic_ai.Agent.system_prompt] function as the only argument.
+2. [`RunContext`][pydantic_ai.tools.RunContext] is parameterized with the type of the dependencies, if this type is incorrect, static type checkers will raise an error.
+3. Access dependencies through the [`.deps`][pydantic_ai.tools.RunContext.deps] attribute.
+4. Access dependencies through the [`.deps`][pydantic_ai.tools.RunContext.deps] attribute.

_(This example is complete, it can be run "as is")_

4 changes: 2 additions & 2 deletions docs/index.md
@@ -125,8 +125,8 @@ async def main():
2. Here we configure the agent to use [OpenAI's GPT-4o model](api/models/openai.md), you can also set the model when running the agent.
3. The `SupportDependencies` dataclass is used to pass data, connections, and logic into the model that will be needed when running [system prompt](agents.md#system-prompts) and [tool](agents.md#function-tools) functions. PydanticAI's system of dependency injection provides a [type-safe](agents.md#static-type-checking) way to customise the behavior of your agents, and can be especially useful when running [unit tests](testing-evals.md) and evals.
4. Static [system prompts](agents.md#system-prompts) can be registered with the [`system_prompt` keyword argument][pydantic_ai.Agent.__init__] to the agent.
-5. Dynamic [system prompts](agents.md#system-prompts) can be registered with the [`@agent.system_prompt`][pydantic_ai.Agent.system_prompt] decorator, and can make use of dependency injection. Dependencies are carried via the [`RunContext`][pydantic_ai.dependencies.RunContext] argument, which is parameterized with the `deps_type` from above. If the type annotation here is wrong, static type checkers will catch it.
-6. [`tool`](agents.md#function-tools) let you register functions which the LLM may call while responding to a user. Again, dependencies are carried via [`RunContext`][pydantic_ai.dependencies.RunContext], any other arguments become the tool schema passed to the LLM. Pydantic is used to validate these arguments, and errors are passed back to the LLM so it can retry.
+5. Dynamic [system prompts](agents.md#system-prompts) can be registered with the [`@agent.system_prompt`][pydantic_ai.Agent.system_prompt] decorator, and can make use of dependency injection. Dependencies are carried via the [`RunContext`][pydantic_ai.tools.RunContext] argument, which is parameterized with the `deps_type` from above. If the type annotation here is wrong, static type checkers will catch it.
+6. [`tool`](agents.md#function-tools) let you register functions which the LLM may call while responding to a user. Again, dependencies are carried via [`RunContext`][pydantic_ai.tools.RunContext], any other arguments become the tool schema passed to the LLM. Pydantic is used to validate these arguments, and errors are passed back to the LLM so it can retry.
7. The docstring of a tool is also passed to the LLM as the description of the tool. Parameter descriptions are [extracted](agents.md#function-tools-and-schema) from the docstring and added to the parameter schema sent to the LLM.
8. [Run the agent](agents.md#running-agents) asynchronously, conducting a conversation with the LLM until a final response is reached. Even in this fairly simple case, the agent will exchange multiple messages with the LLM as tools are called to retrieve a result.
9. The response from the agent will be guaranteed to be a `SupportResult`; if validation fails, [reflection](agents.md#reflection-and-self-correction) will mean the agent is prompted to try again.
2 changes: 1 addition & 1 deletion docs/install.md
@@ -40,7 +40,7 @@ To run the examples, follow instructions in the [examples docs](examples/index.m

## Slim Install

-If you know which model you're going to use and want to avoid installing superfluous package, you can use the [`pydantic-ai-slim`](https://pypi.org/project/pydantic-ai-slim/) package.
+If you know which model you're going to use and want to avoid installing superfluous packages, you can use the [`pydantic-ai-slim`](https://pypi.org/project/pydantic-ai-slim/) package.

If you're using just [`OpenAIModel`][pydantic_ai.models.openai.OpenAIModel], run:

2 changes: 1 addition & 1 deletion mkdocs.yml
@@ -31,9 +31,9 @@ nav:
- examples/chat-app.md
- API Reference:
 - api/agent.md
+- api/tools.md
 - api/result.md
 - api/messages.md
-- api/dependencies.md
 - api/exceptions.md
- api/models/base.md
- api/models/openai.md
4 changes: 2 additions & 2 deletions pydantic_ai_slim/pydantic_ai/__init__.py
@@ -1,8 +1,8 @@
from importlib.metadata import version

 from .agent import Agent
-from .dependencies import RunContext
 from .exceptions import ModelRetry, UnexpectedModelBehavior, UserError
+from .tools import RunContext, Tool

-__all__ = 'Agent', 'RunContext', 'ModelRetry', 'UnexpectedModelBehavior', 'UserError', '__version__'
+__all__ = 'Agent', 'Tool', 'RunContext', 'ModelRetry', 'UnexpectedModelBehavior', 'UserError', '__version__'
 __version__ = version('pydantic_ai_slim')
38 changes: 27 additions & 11 deletions pydantic_ai_slim/pydantic_ai/_pydantic.py
@@ -6,7 +6,7 @@
from __future__ import annotations as _annotations

from inspect import Parameter, signature
-from typing import TYPE_CHECKING, Any, TypedDict, cast, get_origin
+from typing import TYPE_CHECKING, Any, Callable, TypedDict, cast, get_origin

from pydantic import ConfigDict, TypeAdapter
from pydantic._internal import _decorators, _generate_schema, _typing_extra
@@ -20,8 +20,7 @@
from ._utils import ObjectJsonSchema, check_object_json_schema, is_model_like

 if TYPE_CHECKING:
-    from . import _tool
-    from .dependencies import AgentDeps, ToolParams
+    pass


__all__ = 'function_schema', 'LazyTypeAdapter'
@@ -39,17 +38,16 @@ class FunctionSchema(TypedDict):
var_positional_field: str | None


-def function_schema(either_function: _tool.ToolEitherFunc[AgentDeps, ToolParams]) -> FunctionSchema: # noqa: C901
+def function_schema(function: Callable[..., Any], takes_ctx: bool) -> FunctionSchema: # noqa: C901
"""Build a Pydantic validator and JSON schema from a tool function.

     Args:
-        either_function: The function to build a validator and JSON schema for.
+        function: The function to build a validator and JSON schema for.
+        takes_ctx: Whether the function takes a `RunContext` first argument.

     Returns:
         A `FunctionSchema` instance.
     """
-    function = either_function.whichever()
-    takes_ctx = either_function.is_left()
     config = ConfigDict(title=function.__name__)
config_wrapper = ConfigWrapper(config)
gen_schema = _generate_schema.GenerateSchema(config_wrapper)
@@ -78,13 +76,13 @@

         if index == 0 and takes_ctx:
             if not _is_call_ctx(annotation):
-                errors.append('First argument must be a RunContext instance when using `.tool`')
+                errors.append('First parameter of tools that take context must be annotated with RunContext[...]')
                 continue
         elif not takes_ctx and _is_call_ctx(annotation):
-            errors.append('RunContext instance can only be used with `.tool`')
+            errors.append('RunContext annotations can only be used with tools that take context')
             continue
         elif index != 0 and _is_call_ctx(annotation):
-            errors.append('RunContext instance can only be used as the first argument')
+            errors.append('RunContext annotations can only be used as the first argument')
             continue

field_name = p.name
@@ -159,6 +157,24 @@
)


def takes_ctx(function: Callable[..., Any]) -> bool:
    """Check if a function takes a `RunContext` first argument.

    Args:
        function: The function to check.

    Returns:
        `True` if the function takes a `RunContext` as first argument, `False` otherwise.
    """
    sig = signature(function)
    try:
        _, first_param = next(iter(sig.parameters.items()))
    except StopIteration:
        return False
    else:
        return first_param.annotation is not sig.empty and _is_call_ctx(first_param.annotation)


def _build_schema(
fields: dict[str, core_schema.TypedDictField],
var_kwargs_schema: core_schema.CoreSchema | None,
@@ -191,7 +207,7 @@


 def _is_call_ctx(annotation: Any) -> bool:
-    from .dependencies import RunContext
+    from .tools import RunContext

     return annotation is RunContext or (
         _typing_extra.is_generic_alias(annotation) and get_origin(annotation) is RunContext
2 changes: 1 addition & 1 deletion pydantic_ai_slim/pydantic_ai/_result.py
@@ -11,10 +11,10 @@
from typing_extensions import Self, TypeAliasType, TypedDict

 from . import _utils, messages
-from .dependencies import AgentDeps, ResultValidatorFunc, RunContext
 from .exceptions import ModelRetry
 from .messages import ModelStructuredResponse, ToolCall
 from .result import ResultData
+from .tools import AgentDeps, ResultValidatorFunc, RunContext


@dataclass
2 changes: 1 addition & 1 deletion pydantic_ai_slim/pydantic_ai/_system_prompt.py
@@ -6,7 +6,7 @@
from typing import Any, Callable, Generic, cast

from . import _utils
-from .dependencies import AgentDeps, RunContext, SystemPromptFunc
+from .tools import AgentDeps, RunContext, SystemPromptFunc


@dataclass