diff --git a/docs/agents.md b/docs/agents.md
index 74444077f3..f640a65482 100644
--- a/docs/agents.md
+++ b/docs/agents.md
@@ -8,7 +8,7 @@ but multiple agents can also interact to embody more complex workflows.
The [`Agent`][pydantic_ai.Agent] class has full API documentation, but conceptually you can think of an agent as a container for:
* A [system prompt](#system-prompts) — a set of instructions for the LLM written by the developer
-* One or more [retrieval tool](#function-tools) — functions that the LLM may call to get information while generating a response
+* One or more [function tools](tools.md) — functions that the LLM may call to get information while generating a response
* An optional structured [result type](results.md) — the structured datatype the LLM must return at the end of a run
* A [dependency](dependencies.md) type constraint — system prompt functions, tools and result validators may all use dependencies when they're run
* Agents may optionally also have a default [LLM model](api/models/base.md) associated with them; the model to use can also be specified when running the agent
@@ -97,6 +97,7 @@ You can also pass messages from previous runs to continue a conversation or prov
Before you execute any agent runs, do the following:
```python {test="skip" lint="skip"}
import nest_asyncio
+
nest_asyncio.apply()
```
@@ -237,439 +238,11 @@ print(result.data)
_(This example is complete, it can be run "as is")_
-## Function Tools
-
-Function tools provide a mechanism for models to retrieve extra information to help them generate a response.
-
-They're useful when it is impractical or impossible to put all the context an agent might need into the system prompt, or when you want to make agents' behavior more deterministic or reliable by deferring some of the logic required to generate a response to another (not necessarily AI-powered) tool.
-
-!!! info "Function tools vs. RAG"
- Function tools are basically the "R" of RAG (Retrieval-Augmented Generation) — they augment what the model can do by letting it request extra information.
-
- The main semantic difference between PydanticAI Tools and RAG is RAG is synonymous with vector search, while PydanticAI tools are more general-purpose. (Note: we may add support for vector search functionality in the future, particularly an API for generating embeddings. See [#58](https://github.com/pydantic/pydantic-ai/issues/58))
-
-There are a number of ways to register tools with an agent:
-
-* via the [`@agent.tool`][pydantic_ai.Agent.tool] decorator — for tools that need access to the agent [context][pydantic_ai.tools.RunContext]
-* via the [`@agent.tool_plain`][pydantic_ai.Agent.tool_plain] decorator — for tools that do not need access to the agent [context][pydantic_ai.tools.RunContext]
-* via the [`tools`][pydantic_ai.Agent.__init__] keyword argument to `Agent` which can take either plain functions, or instances of [`Tool`][pydantic_ai.tools.Tool]
-
-`@agent.tool` is considered the default decorator since in the majority of cases tools will need access to the agent context.
-
-Here's an example using both:
-
-```python {title="dice_game.py"}
-import random
-
-from pydantic_ai import Agent, RunContext
-
-agent = Agent(
- 'gemini-1.5-flash', # (1)!
- deps_type=str, # (2)!
- system_prompt=(
- "You're a dice game, you should roll the die and see if the number "
- "you get back matches the user's guess. If so, tell them they're a winner. "
- "Use the player's name in the response."
- ),
-)
-
-
-@agent.tool_plain # (3)!
-def roll_die() -> str:
- """Roll a six-sided die and return the result."""
- return str(random.randint(1, 6))
-
-
-@agent.tool # (4)!
-def get_player_name(ctx: RunContext[str]) -> str:
- """Get the player's name."""
- return ctx.deps
-
-
-dice_result = agent.run_sync('My guess is 4', deps='Anne') # (5)!
-print(dice_result.data)
-#> Congratulations Anne, you guessed correctly! You're a winner!
-```
-
-1. This is a pretty simple task, so we can use the fast and cheap Gemini flash model.
-2. We pass the user's name as the dependency, to keep things simple we use just the name as a string as the dependency.
-3. This tool doesn't need any context, it just returns a random number. You could probably use a dynamic system prompt in this case.
-4. This tool needs the player's name, so it uses `RunContext` to access dependencies which are just the player's name in this case.
-5. Run the agent, passing the player's name as the dependency.
-
-_(This example is complete, it can be run "as is")_
-
-Let's print the messages from that game to see what happened:
-
-```python {title="dice_game_messages.py"}
-from dice_game import dice_result
-
-print(dice_result.all_messages())
-"""
-[
- SystemPrompt(
- content="You're a dice game, you should roll the die and see if the number you get back matches the user's guess. If so, tell them they're a winner. Use the player's name in the response.",
- role='system',
- ),
- UserPrompt(
- content='My guess is 4',
- timestamp=datetime.datetime(...),
- role='user',
- ),
- ModelStructuredResponse(
- calls=[
- ToolCall(
- tool_name='roll_die', args=ArgsDict(args_dict={}), tool_call_id=None
- )
- ],
- timestamp=datetime.datetime(...),
- role='model-structured-response',
- ),
- ToolReturn(
- tool_name='roll_die',
- content='4',
- tool_call_id=None,
- timestamp=datetime.datetime(...),
- role='tool-return',
- ),
- ModelStructuredResponse(
- calls=[
- ToolCall(
- tool_name='get_player_name',
- args=ArgsDict(args_dict={}),
- tool_call_id=None,
- )
- ],
- timestamp=datetime.datetime(...),
- role='model-structured-response',
- ),
- ToolReturn(
- tool_name='get_player_name',
- content='Anne',
- tool_call_id=None,
- timestamp=datetime.datetime(...),
- role='tool-return',
- ),
- ModelTextResponse(
- content="Congratulations Anne, you guessed correctly! You're a winner!",
- timestamp=datetime.datetime(...),
- role='model-text-response',
- ),
-]
-"""
-```
-
-We can represent this with a diagram:
-
-```mermaid
-sequenceDiagram
- participant Agent
- participant LLM
-
- Note over Agent: Send prompts
- Agent ->> LLM: System: "You're a dice game..."<br>User: "My guess is 4"
- activate LLM
- Note over LLM: LLM decides to use<br>a tool
-
- LLM ->> Agent: Call tool<br>roll_die()
- deactivate LLM
- activate Agent
- Note over Agent: Rolls a six-sided die
-
- Agent -->> LLM: ToolReturn<br>"4"
- deactivate Agent
- activate LLM
- Note over LLM: LLM decides to use<br>another tool
-
- LLM ->> Agent: Call tool<br>get_player_name()
- deactivate LLM
- activate Agent
- Note over Agent: Retrieves player name
- Agent -->> LLM: ToolReturn<br>"Anne"
- deactivate Agent
- activate LLM
- Note over LLM: LLM constructs final response
-
- LLM ->> Agent: ModelTextResponse<br>"Congratulations Anne, ..."
- deactivate LLM
- Note over Agent: Game session complete
-```
-
-### Registering Function Tools via kwarg
-
-As well as using the decorators, we can register tools via the `tools` argument to the [`Agent` constructor][pydantic_ai.Agent.__init__]. This is useful when you want to re-use tools, and can also give more fine-grained control over the tools.
-
-```python {title="dice_game_tool_kwarg.py"}
-import random
-
-from pydantic_ai import Agent, RunContext, Tool
-
-
-def roll_die() -> str:
- """Roll a six-sided die and return the result."""
- return str(random.randint(1, 6))
-
-
-def get_player_name(ctx: RunContext[str]) -> str:
- """Get the player's name."""
- return ctx.deps
-
-
-agent_a = Agent(
- 'gemini-1.5-flash',
- deps_type=str,
- tools=[roll_die, get_player_name], # (1)!
-)
-agent_b = Agent(
- 'gemini-1.5-flash',
- deps_type=str,
- tools=[ # (2)!
- Tool(roll_die, takes_ctx=False),
- Tool(get_player_name, takes_ctx=True),
- ],
-)
-dice_result = agent_b.run_sync('My guess is 4', deps='Anne')
-print(dice_result.data)
-#> Congratulations Anne, you guessed correctly! You're a winner!
-```
-
-1. The simplest way to register tools via the `Agent` constructor is to pass a list of functions, the function signature is inspected to determine if the tool takes [`RunContext`][pydantic_ai.tools.RunContext].
-2. `agent_a` and `agent_b` are identical — but we can use [`Tool`][pydantic_ai.tools.Tool] to reuse tool definitions and give more fine-grained control over how tools are defined, e.g. setting their name or description, or using a custom [`prepare`](#tool-prepare) method.
-
-_(This example is complete, it can be run "as is")_
-
-### Function Tools vs. Structured Results
-
-As the name suggests, function tools use the model's "tools" or "functions" API to let the model know what is available to call. Tools or functions are also used to define the schema(s) for structured responses, thus a model might have access to many tools, some of which call function tools while others end the run and return a result.
-
-### Function tools and schema
-
-Function parameters are extracted from the function signature, and all parameters except `RunContext` are used to build the schema for that tool call.
-
-Even better, PydanticAI extracts the docstring from functions and (thanks to [griffe](https://mkdocstrings.github.io/griffe/)) extracts parameter descriptions from the docstring and adds them to the schema.
-
-[Griffe supports](https://mkdocstrings.github.io/griffe/reference/docstrings/#docstrings) extracting parameter descriptions from `google`, `numpy` and `sphinx` style docstrings, and PydanticAI will infer the format to use based on the docstring. We plan to add support in the future to explicitly set the style to use, and warn/error if not all parameters are documented; see [#59](https://github.com/pydantic/pydantic-ai/issues/59).
-
-To demonstrate a tool's schema, here we use [`FunctionModel`][pydantic_ai.models.function.FunctionModel] to print the schema a model would receive:
-
-```python {title="tool_schema.py"}
-from pydantic_ai import Agent
-from pydantic_ai.messages import Message, ModelAnyResponse, ModelTextResponse
-from pydantic_ai.models.function import AgentInfo, FunctionModel
-
-agent = Agent()
-
-
-@agent.tool_plain
-def foobar(a: int, b: str, c: dict[str, list[float]]) -> str:
- """Get me foobar.
-
- Args:
- a: apple pie
- b: banana cake
- c: carrot smoothie
- """
- return f'{a} {b} {c}'
-
-
-def print_schema(messages: list[Message], info: AgentInfo) -> ModelAnyResponse:
- tool = info.function_tools[0]
- print(tool.description)
- #> Get me foobar.
- print(tool.parameters_json_schema)
- """
- {
- 'properties': {
- 'a': {'description': 'apple pie', 'title': 'A', 'type': 'integer'},
- 'b': {'description': 'banana cake', 'title': 'B', 'type': 'string'},
- 'c': {
- 'additionalProperties': {'items': {'type': 'number'}, 'type': 'array'},
- 'description': 'carrot smoothie',
- 'title': 'C',
- 'type': 'object',
- },
- },
- 'required': ['a', 'b', 'c'],
- 'type': 'object',
- 'additionalProperties': False,
- }
- """
- return ModelTextResponse(content='foobar')
-
-
-agent.run_sync('hello', model=FunctionModel(print_schema))
-```
-
-_(This example is complete, it can be run "as is")_
-
-The return type of tool can be anything which Pydantic can serialize to JSON as some models (e.g. Gemini) support semi-structured return values, some expect text (OpenAI) but seem to be just as good at extracting meaning from the data. If a Python object is returned and the model expects a string, the value will be serialized to JSON.
-
-If a tool has a single parameter that can be represented as an object in JSON schema (e.g. dataclass, TypedDict, pydantic model), the schema for the tool is simplified to be just that object.
-
-Here's an example, we use [`TestModel.agent_model_function_tools`][pydantic_ai.models.test.TestModel.agent_model_function_tools] to inspect the tool schema that would be passed to the model.
-
-```python {title="single_parameter_tool.py"}
-from pydantic import BaseModel
-
-from pydantic_ai import Agent
-from pydantic_ai.models.test import TestModel
-
-agent = Agent()
-
-
-class Foobar(BaseModel):
- """This is a Foobar"""
-
- x: int
- y: str
- z: float = 3.14
-
-
-@agent.tool_plain
-def foobar(f: Foobar) -> str:
- return str(f)
-
-
-test_model = TestModel()
-result = agent.run_sync('hello', model=test_model)
-print(result.data)
-#> {"foobar":"x=0 y='a' z=3.14"}
-print(test_model.agent_model_function_tools)
-"""
-[
- ToolDefinition(
- name='foobar',
- description='This is a Foobar',
- parameters_json_schema={
- 'properties': {
- 'x': {'title': 'X', 'type': 'integer'},
- 'y': {'title': 'Y', 'type': 'string'},
- 'z': {'default': 3.14, 'title': 'Z', 'type': 'number'},
- },
- 'required': ['x', 'y'],
- 'title': 'Foobar',
- 'type': 'object',
- },
- outer_typed_dict_key=None,
- )
-]
-"""
-```
-
-_(This example is complete, it can be run "as is")_
-
-### Dynamic Function tools {#tool-prepare}
-
-Tools can optionally be defined with another function: `prepare`, which is called at each step of a run to
-customize the definition of the tool passed to the model, or omit the tool completely from that step.
-
-A `prepare` method can be registered via the `prepare` kwarg to any of the tool registration mechanisms:
-
-* [`@agent.tool`][pydantic_ai.Agent.tool] decorator
-* [`@agent.tool_plain`][pydantic_ai.Agent.tool_plain] decorator
-* [`Tool`][pydantic_ai.tools.Tool] dataclass
-
-The `prepare` method, should be of type [`ToolPrepareFunc`][pydantic_ai.tools.ToolPrepareFunc], a function which takes [`RunContext`][pydantic_ai.tools.RunContext] and a pre-built [`ToolDefinition`][pydantic_ai.tools.ToolDefinition], and should either return that `ToolDefinition` with or without modifying it, return a new `ToolDefinition`, or return `None` to indicate this tools should not be registered for that step.
-
-Here's a simple `prepare` method that only includes the tool if the value of the dependency is `42`.
-
-As with the previous example, we use [`TestModel`][pydantic_ai.models.test.TestModel] to demonstrate the behavior without calling a real model.
-
-```python {title="tool_only_if_42.py"}
-from typing import Union
-
-from pydantic_ai import Agent, RunContext
-from pydantic_ai.tools import ToolDefinition
-
-agent = Agent('test')
-
-
-async def only_if_42(
- ctx: RunContext[int], tool_def: ToolDefinition
-) -> Union[ToolDefinition, None]:
- if ctx.deps == 42:
- return tool_def
-
-
-@agent.tool(prepare=only_if_42)
-def hitchhiker(ctx: RunContext[int], answer: str) -> str:
- return f'{ctx.deps} {answer}'
-
-
-result = agent.run_sync('testing...', deps=41)
-print(result.data)
-#> success (no tool calls)
-result = agent.run_sync('testing...', deps=42)
-print(result.data)
-#> {"hitchhiker":"42 a"}
-```
-
-_(This example is complete, it can be run "as is")_
-
-Here's a more complex example where we change the description of the `name` parameter to based on the value of `deps`
-
-For the sake of variation, we create this tool using the [`Tool`][pydantic_ai.tools.Tool] dataclass.
-
-```python {title="customize_name.py"}
-from __future__ import annotations
-
-from typing import Literal
-
-from pydantic_ai import Agent, RunContext
-from pydantic_ai.models.test import TestModel
-from pydantic_ai.tools import Tool, ToolDefinition
-
-
-def greet(name: str) -> str:
- return f'hello {name}'
-
-
-async def prepare_greet(
- ctx: RunContext[Literal['human', 'machine']], tool_def: ToolDefinition
-) -> ToolDefinition | None:
- d = f'Name of the {ctx.deps} to greet.'
- tool_def.parameters_json_schema['properties']['name']['description'] = d
- return tool_def
-
-
-greet_tool = Tool(greet, prepare=prepare_greet)
-test_model = TestModel()
-agent = Agent(test_model, tools=[greet_tool], deps_type=Literal['human', 'machine'])
-
-result = agent.run_sync('testing...', deps='human')
-print(result.data)
-#> {"greet":"hello a"}
-print(test_model.agent_model_function_tools)
-"""
-[
- ToolDefinition(
- name='greet',
- description='',
- parameters_json_schema={
- 'properties': {
- 'name': {
- 'title': 'Name',
- 'type': 'string',
- 'description': 'Name of the human to greet.',
- }
- },
- 'required': ['name'],
- 'type': 'object',
- 'additionalProperties': False,
- },
- outer_typed_dict_key=None,
- )
-]
-"""
-```
-
-_(This example is complete, it can be run "as is")_
-
## Reflection and self-correction
Validation errors from both function tool parameter validation and [structured result validation](results.md#structured-result-validation) can be passed back to the model with a request to retry.
-You can also raise [`ModelRetry`][pydantic_ai.exceptions.ModelRetry] from within a [tool](#function-tools) or [result validator function](results.md#result-validators-functions) to tell the model it should retry generating a response.
+You can also raise [`ModelRetry`][pydantic_ai.exceptions.ModelRetry] from within a [tool](tools.md) or [result validator function](results.md#result-validators-functions) to tell the model it should retry generating a response.
- The default retry count is **1** but can be altered for the [entire agent][pydantic_ai.Agent.__init__], a [specific tool][pydantic_ai.Agent.tool], or a [result validator][pydantic_ai.Agent.__init__].
- You can access the current retry count from within a tool or result validator via [`ctx.retry`][pydantic_ai.tools.RunContext].
@@ -677,11 +250,12 @@ You can also raise [`ModelRetry`][pydantic_ai.exceptions.ModelRetry] from within
Here's an example:
```python {title="tool_retry.py"}
-from fake_database import DatabaseConn
from pydantic import BaseModel
from pydantic_ai import Agent, RunContext, ModelRetry
+from fake_database import DatabaseConn
+
class ChatResult(BaseModel):
user_id: int
diff --git a/docs/dependencies.md b/docs/dependencies.md
index 83dc236b6c..ad6cc5353b 100644
--- a/docs/dependencies.md
+++ b/docs/dependencies.md
@@ -1,6 +1,6 @@
# Dependencies
-PydanticAI uses a dependency injection system to provide data and services to your agent's [system prompts](agents.md#system-prompts), [tools](agents.md#function-tools) and [result validators](results.md#result-validators-functions).
+PydanticAI uses a dependency injection system to provide data and services to your agent's [system prompts](agents.md#system-prompts), [tools](tools.md) and [result validators](results.md#result-validators-functions).
Matching PydanticAI's design philosophy, our dependency system tries to use existing best practice in Python development rather than inventing esoteric "magic", this should make dependencies type-safe, understandable easier to test and ultimately easier to deploy in production.
@@ -101,7 +101,7 @@ _(This example is complete, it can be run "as is")_
### Asynchronous vs. Synchronous dependencies
-[System prompt functions](agents.md#system-prompts), [function tools](agents.md#function-tools) and [result validators](results.md#result-validators-functions) are all run in the async context of an agent run.
+[System prompt functions](agents.md#system-prompts), [function tools](tools.md) and [result validators](results.md#result-validators-functions) are all run in the async context of an agent run.
If these functions are not coroutines (e.g. `async def`) they are called with
[`run_in_executor`][asyncio.loop.run_in_executor] in a thread pool, it's therefore marginally preferable
@@ -158,7 +158,7 @@ _(This example is complete, it can be run "as is")_
## Full Example
-As well as system prompts, dependencies can be used in [tools](agents.md#function-tools) and [result validators](results.md#result-validators-functions).
+As well as system prompts, dependencies can be used in [tools](tools.md) and [result validators](results.md#result-validators-functions).
```python {title="full_example.py" hl_lines="27-35 38-48"}
from dataclasses import dataclass
@@ -275,7 +275,7 @@ async def application_code(prompt: str) -> str: # (3)!
3. Application code that calls the agent, in a real application this might be an API endpoint.
4. Call the agent from within the application code, in a real application this call might be deep within a call stack. Note `app_deps` here will NOT be used when deps are overridden.
-```python {title="test_joke_app.py" hl_lines="10-12"}
+```python {title="test_joke_app.py" hl_lines="10-12" call_name="test_application_code"}
from joke_app import MyDeps, application_code, joke_agent
diff --git a/docs/examples/bank-support.md b/docs/examples/bank-support.md
index 7409674fc0..5a05b87a6b 100644
--- a/docs/examples/bank-support.md
+++ b/docs/examples/bank-support.md
@@ -4,7 +4,7 @@ Demonstrates:
* [dynamic system prompt](../agents.md#system-prompts)
* [structured `result_type`](../results.md#structured-result-validation)
-* [tools](../agents.md#function-tools)
+* [tools](../tools.md)
## Running the Example
diff --git a/docs/examples/rag.md b/docs/examples/rag.md
index e08beddf5a..735a61722c 100644
--- a/docs/examples/rag.md
+++ b/docs/examples/rag.md
@@ -4,7 +4,7 @@ RAG search example. This demo allows you to ask question of the [logfire](https:
Demonstrates:
-* [tools](../agents.md#function-tools)
+* [tools](../tools.md)
* [agent dependencies](../dependencies.md)
* RAG search
diff --git a/docs/examples/weather-agent.md b/docs/examples/weather-agent.md
index 6a0a67f162..4f5d62a20e 100644
--- a/docs/examples/weather-agent.md
+++ b/docs/examples/weather-agent.md
@@ -2,7 +2,7 @@ Example of PydanticAI with multiple tools which the LLM needs to call in turn to
Demonstrates:
-* [tools](../agents.md#function-tools)
+* [tools](../tools.md)
* [agent dependencies](../dependencies.md)
* [streaming text responses](../results.md#streaming-text)
diff --git a/docs/index.md b/docs/index.md
index 240d176a8f..1cf812cd44 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -123,11 +123,11 @@ async def main():
1. This [agent](agents.md) will act as first-tier support in a bank. Agents are generic in the type of dependencies they accept and the type of result they return. In this case, the support agent has type `#!python Agent[SupportDependencies, SupportResult]`.
2. Here we configure the agent to use [OpenAI's GPT-4o model](api/models/openai.md), you can also set the model when running the agent.
-3. The `SupportDependencies` dataclass is used to pass data, connections, and logic into the model that will be needed when running [system prompt](agents.md#system-prompts) and [tool](agents.md#function-tools) functions. PydanticAI's system of dependency injection provides a [type-safe](agents.md#static-type-checking) way to customise the behavior of your agents, and can be especially useful when running [unit tests](testing-evals.md) and evals.
+3. The `SupportDependencies` dataclass is used to pass data, connections, and logic into the model that will be needed when running [system prompt](agents.md#system-prompts) and [tool](tools.md) functions. PydanticAI's system of dependency injection provides a [type-safe](agents.md#static-type-checking) way to customise the behavior of your agents, and can be especially useful when running [unit tests](testing-evals.md) and evals.
4. Static [system prompts](agents.md#system-prompts) can be registered with the [`system_prompt` keyword argument][pydantic_ai.Agent.__init__] to the agent.
5. Dynamic [system prompts](agents.md#system-prompts) can be registered with the [`@agent.system_prompt`][pydantic_ai.Agent.system_prompt] decorator, and can make use of dependency injection. Dependencies are carried via the [`RunContext`][pydantic_ai.tools.RunContext] argument, which is parameterized with the `deps_type` from above. If the type annotation here is wrong, static type checkers will catch it.
-6. [`tool`](agents.md#function-tools) let you register functions which the LLM may call while responding to a user. Again, dependencies are carried via [`RunContext`][pydantic_ai.tools.RunContext], any other arguments become the tool schema passed to the LLM. Pydantic is used to validate these arguments, and errors are passed back to the LLM so it can retry.
-7. The docstring of a tool is also passed to the LLM as the description of the tool. Parameter descriptions are [extracted](agents.md#function-tools-and-schema) from the docstring and added to the parameter schema sent to the LLM.
+6. [`tool`](tools.md) lets you register functions which the LLM may call while responding to a user. Again, dependencies are carried via [`RunContext`][pydantic_ai.tools.RunContext]; any other arguments become the tool schema passed to the LLM. Pydantic is used to validate these arguments, and errors are passed back to the LLM so it can retry.
+7. The docstring of a tool is also passed to the LLM as the description of the tool. Parameter descriptions are [extracted](tools.md#function-tools-and-schema) from the docstring and added to the parameter schema sent to the LLM.
8. [Run the agent](agents.md#running-agents) asynchronously, conducting a conversation with the LLM until a final response is reached. Even in this fairly simple case, the agent will exchange multiple messages with the LLM as tools are called to retrieve a result.
9. The response from the agent will, be guaranteed to be a `SupportResult`, if validation fails [reflection](agents.md#reflection-and-self-correction) will mean the agent is prompted to try again.
10. The result will be validated with Pydantic to guarantee it is a `SupportResult`, since the agent is generic, it'll also be typed as a `SupportResult` to aid with static type checking.
diff --git a/docs/results.md b/docs/results.md
index 121047f435..8bef4af7c6 100644
--- a/docs/results.md
+++ b/docs/results.md
@@ -166,7 +166,7 @@ There two main challenges with streamed results:
Example of streamed text result:
-```python {title="streamed_hello_world.py"}
+```python {title="streamed_hello_world.py" line_length="120"}
from pydantic_ai import Agent
agent = Agent('gemini-1.5-flash') # (1)!
@@ -224,7 +224,7 @@ Not all types are supported with partial validation in Pydantic, see [pydantic/p
Here's an example of streaming a use profile as it's built:
-```python {title="streamed_user_profile.py"}
+```python {title="streamed_user_profile.py" line_length="120"}
from datetime import date
from typing_extensions import TypedDict
@@ -263,7 +263,7 @@ _(This example is complete, it can be run "as is")_
If you want fine-grained control of validation, particularly catching validation errors, you can use the following pattern:
-```python {title="streamed_user_profile.py"}
+```python {title="streamed_user_profile.py" line_length="120"}
from datetime import date
from pydantic import ValidationError
diff --git a/docs/testing-evals.md b/docs/testing-evals.md
index f3300d0f8a..448b3a8d93 100644
--- a/docs/testing-evals.md
+++ b/docs/testing-evals.md
@@ -27,7 +27,7 @@ Unless you're really sure you know better, you'll probably want to follow roughl
The simplest and fastest way to exercise most of your application code is using [`TestModel`][pydantic_ai.models.test.TestModel], this will (by default) call all tools in the agent, then return either plain text or a structured response depending on the return type of the agent.
!!! note "`TestModel` is not magic"
- The "clever" (but not too clever) part of `TestModel` is that it will attempt to generate valid structured data for [function tools](agents.md#function-tools) and [result types](results.md#structured-result-validation) based on the schema of the registered tools.
+ The "clever" (but not too clever) part of `TestModel` is that it will attempt to generate valid structured data for [function tools](tools.md) and [result types](results.md#structured-result-validation) based on the schema of the registered tools.
There's no ML or AI in `TestModel`, it's just plain old procedural Python code that tries to generate data that satisfies the JSON schema of a tool.
@@ -89,7 +89,7 @@ Here we have a function that takes a list of `#!python (user_prompt, user_id)` t
Here's how we would write tests using [`TestModel`][pydantic_ai.models.test.TestModel]:
-```python {title="test_weather_app.py"}
+```python {title="test_weather_app.py" call_name="test_forecast"}
from datetime import timezone
import pytest
@@ -182,7 +182,7 @@ To fully exercise `weather_forecast`, we need to use [`FunctionModel`][pydantic_
Here's an example of using `FunctionModel` to test the `weather_forecast` tool with custom inputs
-```python {title="test_weather_app2.py"}
+```python {title="test_weather_app2.py" call_name="test_forecast_future"}
import re
import pytest
diff --git a/docs/tools.md b/docs/tools.md
new file mode 100644
index 0000000000..9ca27303e5
--- /dev/null
+++ b/docs/tools.md
@@ -0,0 +1,427 @@
+# Function Tools
+
+Function tools provide a mechanism for models to retrieve extra information to help them generate a response.
+
+They're useful when it is impractical or impossible to put all the context an agent might need into the system prompt, or when you want to make agents' behavior more deterministic or reliable by deferring some of the logic required to generate a response to another (not necessarily AI-powered) tool.
+
+!!! info "Function tools vs. RAG"
+ Function tools are basically the "R" of RAG (Retrieval-Augmented Generation) — they augment what the model can do by letting it request extra information.
+
+ The main semantic difference between PydanticAI Tools and RAG is that RAG is synonymous with vector search, while PydanticAI tools are more general-purpose. (Note: we may add support for vector search functionality in the future, particularly an API for generating embeddings. See [#58](https://github.com/pydantic/pydantic-ai/issues/58))
+
+There are a number of ways to register tools with an agent:
+
+* via the [`@agent.tool`][pydantic_ai.Agent.tool] decorator — for tools that need access to the agent [context][pydantic_ai.tools.RunContext]
+* via the [`@agent.tool_plain`][pydantic_ai.Agent.tool_plain] decorator — for tools that do not need access to the agent [context][pydantic_ai.tools.RunContext]
+* via the [`tools`][pydantic_ai.Agent.__init__] keyword argument to `Agent` which can take either plain functions, or instances of [`Tool`][pydantic_ai.tools.Tool]
+
+`@agent.tool` is considered the default decorator since in the majority of cases tools will need access to the agent context.
+
+Here's an example using both:
+
+```python {title="dice_game.py"}
+import random
+
+from pydantic_ai import Agent, RunContext
+
+agent = Agent(
+ 'gemini-1.5-flash', # (1)!
+ deps_type=str, # (2)!
+ system_prompt=(
+ "You're a dice game, you should roll the die and see if the number "
+ "you get back matches the user's guess. If so, tell them they're a winner. "
+ "Use the player's name in the response."
+ ),
+)
+
+
+@agent.tool_plain # (3)!
+def roll_die() -> str:
+ """Roll a six-sided die and return the result."""
+ return str(random.randint(1, 6))
+
+
+@agent.tool # (4)!
+def get_player_name(ctx: RunContext[str]) -> str:
+ """Get the player's name."""
+ return ctx.deps
+
+
+dice_result = agent.run_sync('My guess is 4', deps='Anne') # (5)!
+print(dice_result.data)
+#> Congratulations Anne, you guessed correctly! You're a winner!
+```
+
+1. This is a pretty simple task, so we can use the fast and cheap Gemini flash model.
+2. We pass the user's name as the dependency; to keep things simple we use just the name as a string.
+3. This tool doesn't need any context; it just returns a random number. You could probably use a dynamic system prompt in this case.
+4. This tool needs the player's name, so it uses `RunContext` to access dependencies, which are just the player's name in this case.
+5. Run the agent, passing the player's name as the dependency.
+
+_(This example is complete, it can be run "as is")_
+
+Let's print the messages from that game to see what happened:
+
+```python {title="dice_game_messages.py"}
+from dice_game import dice_result
+
+print(dice_result.all_messages())
+"""
+[
+ SystemPrompt(
+ content="You're a dice game, you should roll the die and see if the number you get back matches the user's guess. If so, tell them they're a winner. Use the player's name in the response.",
+ role='system',
+ ),
+ UserPrompt(
+ content='My guess is 4',
+ timestamp=datetime.datetime(...),
+ role='user',
+ ),
+ ModelStructuredResponse(
+ calls=[
+ ToolCall(
+ tool_name='roll_die', args=ArgsDict(args_dict={}), tool_call_id=None
+ )
+ ],
+ timestamp=datetime.datetime(...),
+ role='model-structured-response',
+ ),
+ ToolReturn(
+ tool_name='roll_die',
+ content='4',
+ tool_call_id=None,
+ timestamp=datetime.datetime(...),
+ role='tool-return',
+ ),
+ ModelStructuredResponse(
+ calls=[
+ ToolCall(
+ tool_name='get_player_name',
+ args=ArgsDict(args_dict={}),
+ tool_call_id=None,
+ )
+ ],
+ timestamp=datetime.datetime(...),
+ role='model-structured-response',
+ ),
+ ToolReturn(
+ tool_name='get_player_name',
+ content='Anne',
+ tool_call_id=None,
+ timestamp=datetime.datetime(...),
+ role='tool-return',
+ ),
+ ModelTextResponse(
+ content="Congratulations Anne, you guessed correctly! You're a winner!",
+ timestamp=datetime.datetime(...),
+ role='model-text-response',
+ ),
+]
+"""
+```
+
+We can represent this with a diagram:
+
+```mermaid
+sequenceDiagram
+ participant Agent
+ participant LLM
+
+ Note over Agent: Send prompts
+ Agent ->> LLM: System: "You're a dice game..."<br>User: "My guess is 4"
+ activate LLM
+ Note over LLM: LLM decides to use<br>a tool
+
+ LLM ->> Agent: Call tool<br>roll_die()
+ deactivate LLM
+ activate Agent
+ Note over Agent: Rolls a six-sided die
+
+ Agent -->> LLM: ToolReturn<br>"4"
+ deactivate Agent
+ activate LLM
+ Note over LLM: LLM decides to use<br>another tool
+
+ LLM ->> Agent: Call tool<br>get_player_name()
+ deactivate LLM
+ activate Agent
+ Note over Agent: Retrieves player name
+ Agent -->> LLM: ToolReturn<br>"Anne"
+ deactivate Agent
+ activate LLM
+ Note over LLM: LLM constructs final response
+
+ LLM ->> Agent: ModelTextResponse<br>"Congratulations Anne, ..."
+ deactivate LLM
+ Note over Agent: Game session complete
+```
+
+## Registering Function Tools via kwarg
+
+As well as using the decorators, we can register tools via the `tools` argument to the [`Agent` constructor][pydantic_ai.Agent.__init__]. This is useful when you want to re-use tools, and can also give more fine-grained control over the tools.
+
+```python {title="dice_game_tool_kwarg.py"}
+import random
+
+from pydantic_ai import Agent, RunContext, Tool
+
+
+def roll_die() -> str:
+ """Roll a six-sided die and return the result."""
+ return str(random.randint(1, 6))
+
+
+def get_player_name(ctx: RunContext[str]) -> str:
+ """Get the player's name."""
+ return ctx.deps
+
+
+agent_a = Agent(
+ 'gemini-1.5-flash',
+ deps_type=str,
+ tools=[roll_die, get_player_name], # (1)!
+)
+agent_b = Agent(
+ 'gemini-1.5-flash',
+ deps_type=str,
+ tools=[ # (2)!
+ Tool(roll_die, takes_ctx=False),
+ Tool(get_player_name, takes_ctx=True),
+ ],
+)
+dice_result = agent_b.run_sync('My guess is 4', deps='Anne')
+print(dice_result.data)
+#> Congratulations Anne, you guessed correctly! You're a winner!
+```
+
+1. The simplest way to register tools via the `Agent` constructor is to pass a list of functions; the function signature is inspected to determine if the tool takes [`RunContext`][pydantic_ai.tools.RunContext].
+2. `agent_a` and `agent_b` are identical — but we can use [`Tool`][pydantic_ai.tools.Tool] to reuse tool definitions and give more fine-grained control over how tools are defined, e.g. setting their name or description, or using a custom [`prepare`](#tool-prepare) method; see the sketch below.
+
+_(This example is complete, it can be run "as is")_
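+
+For instance, assuming the [`Tool`][pydantic_ai.tools.Tool] constructor accepts `name` and `description` arguments, here's a sketch of overriding the name and description the model sees (the `roll_d6` name and wording are illustrative):
+
+```python {title="dice_game_tool_custom.py" test="skip"}
+import random
+
+from pydantic_ai import Agent, Tool
+
+
+def roll_die() -> str:
+    """Roll a six-sided die and return the result."""
+    return str(random.randint(1, 6))
+
+
+agent = Agent(
+    'gemini-1.5-flash',
+    tools=[
+        # illustrative: override the name and description the model sees
+        Tool(
+            roll_die,
+            takes_ctx=False,
+            name='roll_d6',
+            description='Roll a six-sided die and return the result as a string.',
+        ),
+    ],
+)
+```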
+
+## Function Tools vs. Structured Results
+
+As the name suggests, function tools use the model's "tools" or "functions" API to let the model know what is available to call. Tools or functions are also used to define the schema(s) for structured responses; thus a model might have access to many tools, some of which call function tools while others end the run and return a result.
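+
+As a rough sketch of what this means in practice, the example below gives one agent both a function tool and a structured result type; from the model's perspective, both are exposed through its tool/function-calling interface. The names `CityInfo` and `lookup_population` are illustrative, and [`TestModel`][pydantic_ai.models.test.TestModel] stands in for a real model.
+
+```python {title="structured_result_and_tool.py" test="skip"}
+from pydantic import BaseModel
+
+from pydantic_ai import Agent
+from pydantic_ai.models.test import TestModel
+
+
+class CityInfo(BaseModel):
+    """Schema for the structured final result."""
+
+    city: str
+    population: int
+
+
+# the result schema is itself presented to the model as a tool
+agent = Agent(result_type=CityInfo)
+
+
+@agent.tool_plain
+def lookup_population(city: str) -> int:
+    """Look up a city's population (illustrative stub)."""
+    return 2_100_000
+
+
+# TestModel calls the function tool, then fills the CityInfo schema to end the run
+result = agent.run_sync('How many people live in Paris?', model=TestModel())
+print(result.data)
+```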
+
+## Function tools and schema
+
+Function parameters are extracted from the function signature, and all parameters except `RunContext` are used to build the schema for that tool call.
+
+Even better, PydanticAI extracts the docstring from functions and (thanks to [griffe](https://mkdocstrings.github.io/griffe/)) extracts parameter descriptions from the docstring and adds them to the schema.
+
+[Griffe supports](https://mkdocstrings.github.io/griffe/reference/docstrings/#docstrings) extracting parameter descriptions from `google`, `numpy` and `sphinx` style docstrings, and PydanticAI will infer the format to use based on the docstring. We plan to add support in the future to explicitly set the style to use, and warn/error if not all parameters are documented; see [#59](https://github.com/pydantic/pydantic-ai/issues/59).
+
+To demonstrate a tool's schema, here we use [`FunctionModel`][pydantic_ai.models.function.FunctionModel] to print the schema a model would receive:
+
+```python {title="tool_schema.py"}
+from pydantic_ai import Agent
+from pydantic_ai.messages import Message, ModelAnyResponse, ModelTextResponse
+from pydantic_ai.models.function import AgentInfo, FunctionModel
+
+agent = Agent()
+
+
+@agent.tool_plain
+def foobar(a: int, b: str, c: dict[str, list[float]]) -> str:
+ """Get me foobar.
+
+ Args:
+ a: apple pie
+ b: banana cake
+ c: carrot smoothie
+ """
+ return f'{a} {b} {c}'
+
+
+def print_schema(messages: list[Message], info: AgentInfo) -> ModelAnyResponse:
+ tool = info.function_tools[0]
+ print(tool.description)
+ #> Get me foobar.
+ print(tool.parameters_json_schema)
+ """
+ {
+ 'properties': {
+ 'a': {'description': 'apple pie', 'title': 'A', 'type': 'integer'},
+ 'b': {'description': 'banana cake', 'title': 'B', 'type': 'string'},
+ 'c': {
+ 'additionalProperties': {'items': {'type': 'number'}, 'type': 'array'},
+ 'description': 'carrot smoothie',
+ 'title': 'C',
+ 'type': 'object',
+ },
+ },
+ 'required': ['a', 'b', 'c'],
+ 'type': 'object',
+ 'additionalProperties': False,
+ }
+ """
+ return ModelTextResponse(content='foobar')
+
+
+agent.run_sync('hello', model=FunctionModel(print_schema))
+```
+
+_(This example is complete, it can be run "as is")_
+
+The return type of a tool can be anything which Pydantic can serialize to JSON. Some models (e.g. Gemini) support semi-structured return values, while others (e.g. OpenAI) expect text but seem to be just as good at extracting meaning from the data. If a Python object is returned and the model expects a string, the value will be serialized to JSON.
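+
+For example, a tool might return a plain `dict`; the sketch below (the tool name and payload are made up, and [`TestModel`][pydantic_ai.models.test.TestModel] stands in for a real model) relies on that serialization behavior:
+
+```python {title="tool_return_object.py" test="skip"}
+from pydantic_ai import Agent
+from pydantic_ai.models.test import TestModel
+
+agent = Agent()
+
+
+@agent.tool_plain
+def get_forecast(city: str) -> dict[str, str]:
+    """Return a structured payload for the given city."""
+    # if the model expects text, this dict is serialized to JSON for it
+    return {'city': city, 'forecast': 'sunny', 'temperature': '21C'}
+
+
+result = agent.run_sync('What is the weather like in Paris?', model=TestModel())
+print(result.data)  # TestModel's response summarizes the tool call results
+```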
+
+If a tool has a single parameter that can be represented as an object in JSON schema (e.g. dataclass, TypedDict, pydantic model), the schema for the tool is simplified to be just that object.
+
+Here's an example where we use [`TestModel.agent_model_function_tools`][pydantic_ai.models.test.TestModel.agent_model_function_tools] to inspect the tool schema that would be passed to the model:
+
+```python {title="single_parameter_tool.py"}
+from pydantic import BaseModel
+
+from pydantic_ai import Agent
+from pydantic_ai.models.test import TestModel
+
+agent = Agent()
+
+
+class Foobar(BaseModel):
+ """This is a Foobar"""
+
+ x: int
+ y: str
+ z: float = 3.14
+
+
+@agent.tool_plain
+def foobar(f: Foobar) -> str:
+ return str(f)
+
+
+test_model = TestModel()
+result = agent.run_sync('hello', model=test_model)
+print(result.data)
+#> {"foobar":"x=0 y='a' z=3.14"}
+print(test_model.agent_model_function_tools)
+"""
+[
+ ToolDefinition(
+ name='foobar',
+ description='This is a Foobar',
+ parameters_json_schema={
+ 'properties': {
+ 'x': {'title': 'X', 'type': 'integer'},
+ 'y': {'title': 'Y', 'type': 'string'},
+ 'z': {'default': 3.14, 'title': 'Z', 'type': 'number'},
+ },
+ 'required': ['x', 'y'],
+ 'title': 'Foobar',
+ 'type': 'object',
+ },
+ outer_typed_dict_key=None,
+ )
+]
+"""
+```
+
+_(This example is complete, it can be run "as is")_
+
+## Dynamic Function tools {#tool-prepare}
+
+Tools can optionally be defined with another function: `prepare`, which is called at each step of a run to
+customize the definition of the tool passed to the model, or omit the tool completely from that step.
+
+A `prepare` method can be registered via the `prepare` kwarg to any of the tool registration mechanisms:
+
+* [`@agent.tool`][pydantic_ai.Agent.tool] decorator
+* [`@agent.tool_plain`][pydantic_ai.Agent.tool_plain] decorator
+* [`Tool`][pydantic_ai.tools.Tool] dataclass
+
+The `prepare` method should be of type [`ToolPrepareFunc`][pydantic_ai.tools.ToolPrepareFunc], a function which takes [`RunContext`][pydantic_ai.tools.RunContext] and a pre-built [`ToolDefinition`][pydantic_ai.tools.ToolDefinition], and should either return that `ToolDefinition` with or without modifying it, return a new `ToolDefinition`, or return `None` to indicate the tool should not be registered for that step.
+
+Here's a simple `prepare` method that only includes the tool if the value of the dependency is `42`.
+
+As with the previous example, we use [`TestModel`][pydantic_ai.models.test.TestModel] to demonstrate the behavior without calling a real model.
+
+```python {title="tool_only_if_42.py"}
+from typing import Union
+
+from pydantic_ai import Agent, RunContext
+from pydantic_ai.tools import ToolDefinition
+
+agent = Agent('test')
+
+
+async def only_if_42(
+ ctx: RunContext[int], tool_def: ToolDefinition
+) -> Union[ToolDefinition, None]:
+ if ctx.deps == 42:
+ return tool_def
+
+
+@agent.tool(prepare=only_if_42)
+def hitchhiker(ctx: RunContext[int], answer: str) -> str:
+ return f'{ctx.deps} {answer}'
+
+
+result = agent.run_sync('testing...', deps=41)
+print(result.data)
+#> success (no tool calls)
+result = agent.run_sync('testing...', deps=42)
+print(result.data)
+#> {"hitchhiker":"42 a"}
+```
+
+_(This example is complete, it can be run "as is")_
+
+Here's a more complex example where we change the description of the `name` parameter based on the value of `deps`.
+
+For the sake of variation, we create this tool using the [`Tool`][pydantic_ai.tools.Tool] dataclass.
+
+```python {title="customize_name.py"}
+from __future__ import annotations
+
+from typing import Literal
+
+from pydantic_ai import Agent, RunContext
+from pydantic_ai.models.test import TestModel
+from pydantic_ai.tools import Tool, ToolDefinition
+
+
+def greet(name: str) -> str:
+ return f'hello {name}'
+
+
+async def prepare_greet(
+ ctx: RunContext[Literal['human', 'machine']], tool_def: ToolDefinition
+) -> ToolDefinition | None:
+ d = f'Name of the {ctx.deps} to greet.'
+ tool_def.parameters_json_schema['properties']['name']['description'] = d
+ return tool_def
+
+
+greet_tool = Tool(greet, prepare=prepare_greet)
+test_model = TestModel()
+agent = Agent(test_model, tools=[greet_tool], deps_type=Literal['human', 'machine'])
+
+result = agent.run_sync('testing...', deps='human')
+print(result.data)
+#> {"greet":"hello a"}
+print(test_model.agent_model_function_tools)
+"""
+[
+ ToolDefinition(
+ name='greet',
+ description='',
+ parameters_json_schema={
+ 'properties': {
+ 'name': {
+ 'title': 'Name',
+ 'type': 'string',
+ 'description': 'Name of the human to greet.',
+ }
+ },
+ 'required': ['name'],
+ 'type': 'object',
+ 'additionalProperties': False,
+ },
+ outer_typed_dict_key=None,
+ )
+]
+"""
+```
+
+_(This example is complete, it can be run "as is")_
diff --git a/mkdocs.yml b/mkdocs.yml
index dafa1262f6..dfe510851a 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -17,6 +17,7 @@ nav:
- Documentation:
- agents.md
- dependencies.md
+ - tools.md
- results.md
- message-history.md
- testing-evals.md
diff --git a/pydantic_ai_slim/pydantic_ai/agent.py b/pydantic_ai_slim/pydantic_ai/agent.py
index 8e4bbcfa9f..7eb81e9dd6 100644
--- a/pydantic_ai_slim/pydantic_ai/agent.py
+++ b/pydantic_ai_slim/pydantic_ai/agent.py
@@ -562,7 +562,7 @@ def tool(
Can decorate a sync or async functions.
The docstring is inspected to extract both the tool description and description of each parameter,
- [learn more](../agents.md#function-tools-and-schema).
+ [learn more](../tools.md#function-tools-and-schema).
We can't add overloads for every possible signature of tool, since the return type is a recursive union
so the signature of functions decorated with `@agent.tool` is obscured.
@@ -634,7 +634,7 @@ def tool_plain(
Can decorate a sync or async functions.
The docstring is inspected to extract both the tool description and description of each parameter,
- [learn more](../agents.md#function-tools-and-schema).
+ [learn more](../tools.md#function-tools-and-schema).
We can't add overloads for every possible signature of tool, since the return type is a recursive union
so the signature of functions decorated with `@agent.tool` is obscured.
diff --git a/pydantic_ai_slim/pydantic_ai/tools.py b/pydantic_ai_slim/pydantic_ai/tools.py
index 6f87e7cf44..c368b6c372 100644
--- a/pydantic_ai_slim/pydantic_ai/tools.py
+++ b/pydantic_ai_slim/pydantic_ai/tools.py
@@ -97,11 +97,11 @@ class RunContext(Generic[AgentDeps]):
ToolPrepareFunc: TypeAlias = 'Callable[[RunContext[AgentDeps], ToolDefinition], Awaitable[ToolDefinition | None]]'
"""Definition of a function that can prepare a tool definition at call time.
-See [tool docs](../agents.md#tool-prepare) for more information.
+See [tool docs](../tools.md#tool-prepare) for more information.
Example — here `only_if_42` is valid as a `ToolPrepareFunc`:
-```python
+```python {lint="not-imports"}
from typing import Union
from pydantic_ai import RunContext, Tool
@@ -157,7 +157,7 @@ def __init__(
Example usage:
- ```python
+ ```python {lint="not-imports"}
from pydantic_ai import Agent, RunContext, Tool
async def my_tool(ctx: RunContext[int], x: int, y: int) -> str:
@@ -168,7 +168,7 @@ async def my_tool(ctx: RunContext[int], x: int, y: int) -> str:
or with a custom prepare method:
- ```python
+ ```python {lint="not-imports"}
from typing import Union
from pydantic_ai import Agent, RunContext, Tool
diff --git a/tests/test_examples.py b/tests/test_examples.py
index db89a1eaa4..4e4bbf9dc9 100644
--- a/tests/test_examples.py
+++ b/tests/test_examples.py
@@ -97,24 +97,16 @@ def test_docs_examples(
ruff_ignore: list[str] = ['D']
# `from bank_database import DatabaseConn` wrongly sorted in imports
# waiting for https://github.com/pydantic/pytest-examples/issues/43
- if 'import DatabaseConn' in example.source:
- ruff_ignore.append('I001')
- elif 'async def my_tool(' in example.source or 'async def only_if_42(' in example.source:
- # until https://github.com/pydantic/pytest-examples/issues/46 is fixed
+ # and https://github.com/pydantic/pytest-examples/issues/46
+ if opt_lint == 'not-imports' or 'import DatabaseConn' in example.source:
ruff_ignore.append('I001')
- line_length = 88
- if opt_title in ('streamed_hello_world.py', 'streamed_user_profile.py'):
- line_length = 120
+ line_length = int(prefix_settings.get('line_length', '88'))
eval_example.set_config(ruff_ignore=ruff_ignore, target_version='py39', line_length=line_length)
eval_example.print_callback = print_callback
- call_name = 'main'
- for name in ('test_application_code', 'test_forecast', 'test_forecast_future'):
- if f'def {name}():' in example.source:
- call_name = name
- break
+ call_name = prefix_settings.get('call_name', 'main')
if not opt_lint.startswith('skip'):
if eval_example.update_examples: # pragma: no cover