# Output
"Output" refers to the final value returned from running an agent. This can be either plain text, structured data, or the result of a function called with arguments provided by the model.
The output is wrapped in AgentRunResult
or StreamedRunResult
so that you can access other data, like usage of the run and message history.
Both AgentRunResult
and StreamedRunResult
are generic in the data they wrap, so typing information about the data returned by the agent is preserved.
A run ends when the model responds with one of the structured output types, or, if no output type is specified or `str` is one of the allowed options, when a plain text response is received. A run can also be cancelled if usage limits are exceeded, see Usage Limits.

Here's an example using a Pydantic model as the `output_type`, forcing the model to respond with data matching our specification:
```python
from pydantic import BaseModel

from pydantic_ai import Agent


class CityLocation(BaseModel):
    city: str
    country: str


agent = Agent('google-gla:gemini-1.5-flash', output_type=CityLocation)
result = agent.run_sync('Where were the olympics held in 2012?')
print(result.output)
#> city='London' country='United Kingdom'
print(result.usage())
#> Usage(requests=1, request_tokens=57, response_tokens=8, total_tokens=65)
```
(This example is complete, it can be run "as is")
## Output data
The `Agent` class constructor takes an `output_type` argument that accepts one or more types or output functions. It supports simple scalar types, list and dict types, dataclasses and Pydantic models, as well as type unions -- generally everything supported as type hints in a Pydantic model. You can also pass a list of multiple choices.
By default, Pydantic AI leverages the model's tool calling capability to make it return structured data. When multiple output types are specified (in a union or list), each member is registered with the model as a separate output tool in order to reduce the complexity of the schema and maximise the chances a model will respond correctly. This has been shown to work well across a wide range of models. If you'd like to change the names of the output tools, use a model's native structured output feature, or pass the output schema to the model in its instructions, you can use an output mode marker class.
When no output type is specified, or when `str` is among the output types, any plain text response from the model will be used as the output data. If `str` is not among the output types, the model is forced to return structured data or call an output function.

If the output type schema is not of type `"object"` (e.g. it's `int` or `list[int]`), the output type is wrapped in a single-element object, so the schemas of all tools registered with the model are object schemas.

Structured outputs (like tools) use Pydantic to build the JSON schema used for the tool, and to validate the data returned by the model.
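For instance, a non-object output type like `list[int]` is wrapped behind the scenes, but this is transparent to your code. A minimal sketch (the model name and prompt here are illustrative, not prescribed by the library):

```python
from pydantic_ai import Agent

# list[int] is not an "object" schema, so Pydantic AI wraps it in a
# single-element object before registering it as the output tool
agent = Agent('openai:gpt-4o', output_type=list[int])

result = agent.run_sync('Give me the first three prime numbers.')
print(result.output)  # e.g. [2, 3, 5]
```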
### Type checking considerations
The `Agent` class is generic in its output type, and this type is carried through to `AgentRunResult.output` and `StreamedRunResult.output` so that your IDE or static type checker can warn you when your code doesn't properly take into account all the possible values those outputs could have.

Static type checkers like pyright and mypy will do their best to infer the agent's output type from the `output_type` you've specified, but they're not always able to do so correctly when you provide functions or multiple types in a union or list, even though PydanticAI will behave correctly. When this happens, your type checker will complain even when you're confident you've passed a valid `output_type`, and you'll need to help the type checker by explicitly specifying the generic parameters on the `Agent` constructor. This is shown in the second example below and the output functions example further down.

Specifically, there are three valid uses of `output_type` where you'll need to do this:
1. When using a union of types, e.g. `output_type=Foo | Bar`, or in older Python, `output_type=Union[Foo, Bar]`. Until PEP-747 "Annotating Type Forms" lands in Python 3.15, type checkers do not consider these a valid value for `output_type`. In addition to the generic parameters on the `Agent` constructor, you'll need to add `# type: ignore` to the line that passes the union to `output_type`. Alternatively, you can use a list: `output_type=[Foo, Bar]`.
2. With mypy: when using a list, as a functionally equivalent alternative to a union, or because you're passing in output functions. Pyright does handle this correctly, and we've filed an issue with mypy to try and get this fixed.
3. With mypy: when using an async output function. Pyright does handle this correctly, and we've filed an issue with mypy to try and get this fixed.
Here's an example of returning either text or structured data:
```python
from pydantic import BaseModel

from pydantic_ai import Agent


class Box(BaseModel):
    width: int
    height: int
    depth: int
    units: str


agent = Agent(
    'openai:gpt-4o-mini',
    output_type=[Box, str],  # (1)!
    system_prompt=(
        "Extract me the dimensions of a box, "
        "if you can't extract all data, ask the user to try again."
    ),
)

result = agent.run_sync('The box is 10x20x30')
print(result.output)
#> Please provide the units for the dimensions (e.g., cm, in, m).

result = agent.run_sync('The box is 10x20x30 cm')
print(result.output)
#> width=10 height=20 depth=30 units='cm'
```
1. This could also have been a union: `output_type=Box | str` (or in older Python, `output_type=Union[Box, str]`). However, as explained in the "Type checking considerations" section above, that would've required explicitly specifying the generic parameters on the `Agent` constructor and adding `# type: ignore` to this line in order to be type checked correctly.
(This example is complete, it can be run "as is")
Here's an example of using a union return type, which will register multiple output tools and wrap non-object schemas in an object:
```python
from typing import Union

from pydantic_ai import Agent

agent = Agent[None, Union[list[str], list[int]]](
    'openai:gpt-4o-mini',
    output_type=Union[list[str], list[int]],  # type: ignore # (1)!
    system_prompt='Extract either colors or sizes from the shapes provided.',
)

result = agent.run_sync('red square, blue circle, green triangle')
print(result.output)
#> ['red', 'blue', 'green']

result = agent.run_sync('square size 10, circle size 20, triangle size 30')
print(result.output)
#> [10, 20, 30]
```
1. As explained in the "Type checking considerations" section above, using a union rather than a list requires explicitly specifying the generic parameters on the `Agent` constructor and adding `# type: ignore` to this line in order to be type checked correctly.
(This example is complete, it can be run "as is")
## Output functions
Instead of plain text or structured data, you may want the output of your agent run to be the result of a function called with arguments provided by the model, for example to further process or validate the data provided through the arguments (with the option to tell the model to try again), or to hand off to another agent.
Output functions are similar to function tools, but the model is forced to call one of them, the call ends the agent run, and the result is not passed back to the model.
As with tool functions, output function arguments provided by the model are validated using Pydantic, they can optionally take `RunContext` as the first argument, and they can raise `ModelRetry` to ask the model to try again with modified arguments (or with a different output type).

To specify output functions, you set the agent's `output_type` to either a single function (or bound instance method), or a list of functions. The list can also contain other output types like simple scalars or entire Pydantic models.

You typically do not want to also register your output function as a tool (using the `@agent.tool` decorator or `tools` argument), as this could confuse the model about which it should be calling.
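At its simplest, a single output function can be passed directly. A minimal sketch (the model name and the validation rule are illustrative, not part of the library):

```python
from pydantic_ai import Agent, ModelRetry


def validated_email(email: str) -> str:
    """Return a normalized email address."""
    if '@' not in email:
        # Ask the model to try again with a corrected argument
        raise ModelRetry('That does not look like an email address, please try again.')
    return email.lower()


# The model is forced to call validated_email; its return value ends the run
agent = Agent('openai:gpt-4o', output_type=validated_email)
```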
Here's an example of all of these features in action:
```python
import re
from typing import Union

from pydantic import BaseModel

from pydantic_ai import Agent, ModelRetry, RunContext
from pydantic_ai.exceptions import UnexpectedModelBehavior


class Row(BaseModel):
    name: str
    country: str


tables = {
    'capital_cities': [
        Row(name='Amsterdam', country='Netherlands'),
        Row(name='Mexico City', country='Mexico'),
    ]
}


class SQLFailure(BaseModel):
    """An unrecoverable failure. Only use this when you can't change the query to make it work."""

    explanation: str


def run_sql_query(query: str) -> list[Row]:
    """Run a SQL query on the database."""
    select_table = re.match(r'SELECT (.+) FROM (\w+)', query)
    if select_table:
        column_names = select_table.group(1)
        if column_names != '*':
            raise ModelRetry("Only 'SELECT *' is supported, you'll have to do column filtering manually.")

        table_name = select_table.group(2)
        if table_name not in tables:
            raise ModelRetry(
                f"Unknown table '{table_name}' in query '{query}'. Available tables: {', '.join(tables.keys())}."
            )

        return tables[table_name]

    raise ModelRetry(f"Unsupported query: '{query}'.")


sql_agent = Agent[None, Union[list[Row], SQLFailure]](
    'openai:gpt-4o',
    output_type=[run_sql_query, SQLFailure],
    instructions='You are a SQL agent that can run SQL queries on a database.',
)


async def hand_off_to_sql_agent(ctx: RunContext, query: str) -> list[Row]:
    """I take natural language queries, turn them into SQL, and run them on a database."""
    # Drop the final message with the output tool call, as it shouldn't be passed on to the SQL agent
    messages = ctx.messages[:-1]

    try:
        result = await sql_agent.run(query, message_history=messages)
        output = result.output
        if isinstance(output, SQLFailure):
            raise ModelRetry(f'SQL agent failed: {output.explanation}')
        return output
    except UnexpectedModelBehavior as e:
        # Bubble up potentially retryable errors to the router agent
        if (cause := e.__cause__) and hasattr(cause, 'tool_retry'):
            raise ModelRetry(f'SQL agent failed: {cause.tool_retry.content}') from e
        else:
            raise


class RouterFailure(BaseModel):
    """Use me when no appropriate agent is found or the used agent failed."""

    explanation: str


router_agent = Agent[None, Union[list[Row], RouterFailure]](
    'openai:gpt-4o',
    output_type=[hand_off_to_sql_agent, RouterFailure],
    instructions='You are a router to other agents. Never try to solve a problem yourself, just pass it on.',
)

result = router_agent.run_sync('Select the names and countries of all capitals')
print(result.output)
"""
[
    Row(name='Amsterdam', country='Netherlands'),
    Row(name='Mexico City', country='Mexico'),
]
"""

result = router_agent.run_sync('Select all pets')
print(repr(result.output))
"""
RouterFailure(explanation="The requested table 'pets' does not exist in the database. The only available table is 'capital_cities', which does not contain data about pets.")
"""

result = router_agent.run_sync('How do I fly from Amsterdam to Mexico City?')
print(repr(result.output))
"""
RouterFailure(explanation='I am not equipped to provide travel information, such as flights from Amsterdam to Mexico City.')
"""
```
### Text output
If you provide an output function that takes a string, Pydantic AI will by default create an output tool for it, as for any other output function. If instead you'd like the model to provide the string using plain text output, you can wrap the function in the `TextOutput` marker class. If desired, this marker class can be used alongside one or more `ToolOutput` marker classes (or unmarked types or functions) in a list provided to `output_type`.
```python
from pydantic_ai import Agent, TextOutput


def split_into_words(text: str) -> list[str]:
    return text.split()


agent = Agent(
    'openai:gpt-4o',
    output_type=TextOutput(split_into_words),
)
result = agent.run_sync('Who was Albert Einstein?')
print(result.output)
#> ['Albert', 'Einstein', 'was', 'a', 'German-born', 'theoretical', 'physicist.']
```
(This example is complete, it can be run "as is")
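As a sketch of combining markers in one list (the `ToolOutput` marker is covered under "Tool Output" below; the model name here is illustrative), the model can either call an output tool or answer in plain text that is passed through a function:

```python
from pydantic import BaseModel

from pydantic_ai import Agent, TextOutput, ToolOutput


class Fruit(BaseModel):
    name: str
    color: str


def as_words(text: str) -> list[str]:
    return text.split()


# The model may either call the Fruit output tool, or reply in plain
# text, in which case the text is passed through as_words.
agent = Agent(
    'openai:gpt-4o',
    output_type=[ToolOutput(Fruit), TextOutput(as_words)],
)
```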
## Output modes
Pydantic AI implements three different methods to get a model to output structured data:
- Tool Output, where tool calls are used to produce the output.
- Native Output, where the model is required to produce text content compliant with a provided JSON schema.
- Prompted Output, where a prompt is injected into the model instructions including the desired JSON schema, and we attempt to parse the model's plain-text response as appropriate.
### Tool Output
In the default Tool Output mode, the output JSON schema of each output type (or function) is provided to the model as the parameters schema of a special output tool. This is the default as it's supported by virtually all models and has been shown to work very well.
If you'd like to change the name of the output tool, pass a custom description to aid the model, or turn strict mode on or off, you can wrap the type(s) in the `ToolOutput` marker class and provide the appropriate arguments. Note that by default, the description is taken from the docstring specified on a Pydantic model or output function, so specifying it using the marker class is typically not necessary.
```python
from pydantic import BaseModel

from pydantic_ai import Agent, ToolOutput


class Fruit(BaseModel):
    name: str
    color: str


class Vehicle(BaseModel):
    name: str
    wheels: int


agent = Agent(
    'openai:gpt-4o',
    output_type=[  # (1)!
        ToolOutput(Fruit, name='return_fruit'),
        ToolOutput(Vehicle, name='return_vehicle'),
    ],
)
result = agent.run_sync('What is a banana?')
print(repr(result.output))
#> Fruit(name='banana', color='yellow')
```
1. If we were passing just `Fruit` and `Vehicle` without custom tool names, we could have used a union: `output_type=Fruit | Vehicle` (or in older Python, `output_type=Union[Fruit, Vehicle]`). However, as `ToolOutput` is an object rather than a type, we have to use a list.
(This example is complete, it can be run "as is")
### Native Output
Native Output mode uses a model's native "Structured Outputs" feature (aka "JSON Schema response format"), where the model is forced to only output text matching the provided JSON schema. Note that this is not supported by all models, and sometimes comes with restrictions. For example, Anthropic does not support it at all, and Gemini cannot use tools at the same time as structured output; attempting to do so will result in an error.
To use this mode, you can wrap the output type(s) in the `NativeOutput` marker class that also lets you specify a `name` and `description` if the name and docstring of the type or function are not sufficient.
```python
from tool_output import Fruit, Vehicle

from pydantic_ai import Agent, NativeOutput

agent = Agent(
    'openai:gpt-4o',
    output_type=NativeOutput(
        [Fruit, Vehicle],  # (1)!
        name='Fruit or vehicle',
        description='Return a fruit or vehicle.',
    ),
)
result = agent.run_sync('What is a Ford Explorer?')
print(repr(result.output))
#> Vehicle(name='Ford Explorer', wheels=4)
```
1. This could also have been a union: `output_type=Fruit | Vehicle` (or in older Python, `output_type=Union[Fruit, Vehicle]`). However, as explained in the "Type checking considerations" section above, that would've required explicitly specifying the generic parameters on the `Agent` constructor and adding `# type: ignore` to this line in order to be type checked correctly.
(This example is complete, it can be run "as is")
### Prompted Output
In this mode, the model is prompted to output text matching the provided JSON schema through its instructions and it's up to the model to interpret those instructions correctly. This is usable with all models, but is often the least reliable approach as the model is not forced to match the schema.
While we would generally suggest starting with tool or native output, in some cases this mode may result in higher quality outputs, and for models without native tool calling or structured output support it is the only option for producing structured outputs.
If the model API supports the "JSON Mode" feature (aka "JSON Object response format") to force the model to output valid JSON, this is enabled, but it's still up to the model to abide by the schema. Pydantic AI will validate the returned structured data and tell the model to try again if validation fails, but if the model is not intelligent enough this may not be sufficient.
To use this mode, you can wrap the output type(s) in the `PromptedOutput` marker class that also lets you specify a `name` and `description` if the name and docstring of the type or function are not sufficient. Additionally, it supports a `template` argument that lets you specify a custom instructions template to be used instead of the default.
```python
from pydantic import BaseModel
from tool_output import Vehicle

from pydantic_ai import Agent, PromptedOutput


class Device(BaseModel):
    name: str
    kind: str


agent = Agent(
    'openai:gpt-4o',
    output_type=PromptedOutput(
        [Vehicle, Device],  # (1)!
        name='Vehicle or device',
        description='Return a vehicle or device.',
    ),
)
result = agent.run_sync('What is a MacBook?')
print(repr(result.output))
#> Device(name='MacBook', kind='laptop')

agent = Agent(
    'openai:gpt-4o',
    output_type=PromptedOutput(
        [Vehicle, Device],
        template='Gimme some JSON: {schema}',
    ),
)
result = agent.run_sync('What is a Ford Explorer?')
print(repr(result.output))
#> Vehicle(name='Ford Explorer', wheels=4)
```
1. This could also have been a union: `output_type=Vehicle | Device` (or in older Python, `output_type=Union[Vehicle, Device]`). However, as explained in the "Type checking considerations" section above, that would've required explicitly specifying the generic parameters on the `Agent` constructor and adding `# type: ignore` to this line in order to be type checked correctly.
(This example is complete, it can be run "as is")
## Output validators
Some validation is inconvenient or impossible to do in Pydantic validators, in particular when the validation requires IO and is asynchronous. PydanticAI provides a way to add validation functions via the `agent.output_validator` decorator.

If you want to implement separate validation logic for different output types, it's recommended to use output functions instead, to save you from having to do `isinstance` checks inside the output validator.

If you want the model to output plain text, do your own processing or validation, and then have the agent's final output be the result of your function, it's recommended to use an output function with the `TextOutput` marker class.
Here's a simplified variant of the SQL Generation example:
```python
from typing import Union

from fake_database import DatabaseConn, QueryError
from pydantic import BaseModel

from pydantic_ai import Agent, ModelRetry, RunContext


class Success(BaseModel):
    sql_query: str


class InvalidRequest(BaseModel):
    error_message: str


Output = Union[Success, InvalidRequest]

agent = Agent[DatabaseConn, Output](
    'google-gla:gemini-1.5-flash',
    output_type=Output,  # type: ignore
    deps_type=DatabaseConn,
    system_prompt='Generate PostgreSQL flavored SQL queries based on user input.',
)


@agent.output_validator
async def validate_sql(ctx: RunContext[DatabaseConn], output: Output) -> Output:
    if isinstance(output, InvalidRequest):
        return output
    try:
        await ctx.deps.execute(f'EXPLAIN {output.sql_query}')
    except QueryError as e:
        raise ModelRetry(f'Invalid query: {e}') from e
    else:
        return output


result = agent.run_sync(
    'get me users who were last active yesterday.', deps=DatabaseConn()
)
print(result.output)
#> sql_query='SELECT * FROM users WHERE last_active::date = today() - interval 1 day'
```
(This example is complete, it can be run "as is")
## Streamed Results
There are two main challenges with streamed results:

- Validating structured responses before they're complete; this is achieved by "partial validation", which was recently added to Pydantic in pydantic/pydantic#10748 (see the sketch after this list).
- When receiving a response, we don't know if it's the final response without starting to stream it and peeking at the content. PydanticAI streams just enough of the response to sniff out if it's a tool call or an output, then streams the whole thing and calls tools, or returns the stream as a `StreamedRunResult`.
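To illustrate what partial validation means in plain Pydantic terms (a sketch independent of Pydantic AI's streaming machinery):

```python
from pydantic import TypeAdapter

ta = TypeAdapter(dict[str, int])

# Incomplete JSON, as you might see mid-stream: the partial flag
# validates as much of the document as is complete, e.g. {'a': 1}
print(ta.validate_json('{"a": 1, "b"', experimental_allow_partial=True))
```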
### Streaming Text
Example of streamed text output:
```python
from pydantic_ai import Agent

agent = Agent('google-gla:gemini-1.5-flash')  # (1)!


async def main():
    async with agent.run_stream('Where does "hello world" come from?') as result:  # (2)!
        async for message in result.stream_text():  # (3)!
            print(message)
            #> The first known
            #> The first known use of "hello,
            #> The first known use of "hello, world" was in
            #> The first known use of "hello, world" was in a 1974 textbook
            #> The first known use of "hello, world" was in a 1974 textbook about the C
            #> The first known use of "hello, world" was in a 1974 textbook about the C programming language.
```
1. Streaming works with the standard `Agent` class, and doesn't require any special setup, just a model that supports streaming (currently all models support streaming).
2. The `Agent.run_stream()` method is used to start a streamed run; this method returns a context manager so the connection can be closed when the stream completes.
3. Each item yielded by `StreamedRunResult.stream_text()` is the complete text response, extended as new data is received.
(This example is complete, it can be run "as is" — you'll need to add `asyncio.run(main())` to run `main`)
We can also stream text as deltas rather than the entire text in each item:
```python
from pydantic_ai import Agent

agent = Agent('google-gla:gemini-1.5-flash')


async def main():
    async with agent.run_stream('Where does "hello world" come from?') as result:
        async for message in result.stream_text(delta=True):  # (1)!
            print(message)
            #> The first known
            #> use of "hello,
            #> world" was in
            #> a 1974 textbook
            #> about the C
            #> programming language.
```
1. `stream_text` will error if the response is not text.
(This example is complete, it can be run "as is" — you'll need to add `asyncio.run(main())` to run `main`)
**Output message not included in messages**

The final output message will NOT be added to result messages if you use `.stream_text(delta=True)`, see Messages and chat history for more information.
### Streaming Structured Output
Not all types are supported with partial validation in Pydantic, see pydantic/pydantic#10748; generally for model-like structures it's currently best to use `TypedDict`.

Here's an example of streaming a user profile as it's built:
```python
from datetime import date

from typing_extensions import TypedDict

from pydantic_ai import Agent


class UserProfile(TypedDict, total=False):
    name: str
    dob: date
    bio: str


agent = Agent(
    'openai:gpt-4o',
    output_type=UserProfile,
    system_prompt='Extract a user profile from the input',
)


async def main():
    user_input = 'My name is Ben, I was born on January 28th 1990, I like the chain the dog and the pyramid.'
    async with agent.run_stream(user_input) as result:
        async for profile in result.stream():
            print(profile)
            #> {'name': 'Ben'}
            #> {'name': 'Ben'}
            #> {'name': 'Ben', 'dob': date(1990, 1, 28), 'bio': 'Likes'}
            #> {'name': 'Ben', 'dob': date(1990, 1, 28), 'bio': 'Likes the chain the '}
            #> {'name': 'Ben', 'dob': date(1990, 1, 28), 'bio': 'Likes the chain the dog and the pyr'}
            #> {'name': 'Ben', 'dob': date(1990, 1, 28), 'bio': 'Likes the chain the dog and the pyramid'}
            #> {'name': 'Ben', 'dob': date(1990, 1, 28), 'bio': 'Likes the chain the dog and the pyramid'}
```
(This example is complete, it can be run "as is" — you'll need to add `asyncio.run(main())` to run `main`)
If you want fine-grained control of validation, particularly catching validation errors, you can use the following pattern:
```python
from datetime import date

from pydantic import ValidationError
from typing_extensions import TypedDict

from pydantic_ai import Agent


class UserProfile(TypedDict, total=False):
    name: str
    dob: date
    bio: str


agent = Agent('openai:gpt-4o', output_type=UserProfile)


async def main():
    user_input = 'My name is Ben, I was born on January 28th 1990, I like the chain the dog and the pyramid.'
    async with agent.run_stream(user_input) as result:
        async for message, last in result.stream_structured(debounce_by=0.01):  # (1)!
            try:
                profile = await result.validate_structured_output(  # (2)!
                    message,
                    allow_partial=not last,
                )
            except ValidationError:
                continue
            print(profile)
            #> {'name': 'Ben'}
            #> {'name': 'Ben'}
            #> {'name': 'Ben', 'dob': date(1990, 1, 28), 'bio': 'Likes'}
            #> {'name': 'Ben', 'dob': date(1990, 1, 28), 'bio': 'Likes the chain the '}
            #> {'name': 'Ben', 'dob': date(1990, 1, 28), 'bio': 'Likes the chain the dog and the pyr'}
            #> {'name': 'Ben', 'dob': date(1990, 1, 28), 'bio': 'Likes the chain the dog and the pyramid'}
            #> {'name': 'Ben', 'dob': date(1990, 1, 28), 'bio': 'Likes the chain the dog and the pyramid'}
```
1. `stream_structured` streams the data as `ModelResponse` objects, thus iteration can't fail with a `ValidationError`.
2. `validate_structured_output` validates the data, `allow_partial=True` enables pydantic's `experimental_allow_partial` flag on `TypeAdapter`.
(This example is complete, it can be run "as is" — you'll need to add `asyncio.run(main())` to run `main`)
## Examples
The following examples demonstrate how to use streamed responses in PydanticAI: