2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "2.2.0"
".": "2.3.0"
}
6 changes: 3 additions & 3 deletions .stats.yml
@@ -1,4 +1,4 @@
configured_endpoints: 136
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-d64cf80d2ebddf175c5578f68226a3d5bbd3f7fd8d62ccac2205f3fc05a355ee.yml
-openapi_spec_hash: d51e0d60d0c536f210b597a211bc5af0
-config_hash: e7c42016df9c6bd7bd6ff15101b9bc9b
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-e66e85fb7f72477256dca1acb6b23396989d381c5c1b318de564195436bcb93f.yml
+openapi_spec_hash: 0a4bbb5aa0ae532a072bd6b3854e70b1
+config_hash: 89bf7bb3a1f9439ffc6ea0e7dc57ba9b
13 changes: 13 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,18 @@
# Changelog

## 2.3.0 (2025-10-10)

Full Changelog: [v2.2.0...v2.3.0](https://github.com/openai/openai-python/compare/v2.2.0...v2.3.0)

### Features

* **api:** comparison filter in/not in ([aa49f62](https://github.com/openai/openai-python/commit/aa49f626a6ea9d77ad008badfb3741e16232d62f))


### Chores

* **package:** bump jiter to >=0.10.0 to support Python 3.14 ([#2618](https://github.com/openai/openai-python/issues/2618)) ([aa445ca](https://github.com/openai/openai-python/commit/aa445cab5c93c6908697fe98e73e16963330b141))
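The `in` / `not in` comparison filter feature above can be sketched as plain dicts of the shape the API accepts for filter objects. This is a hypothetical illustration: the exact `type` literals and the `key`/`value` field names are assumptions inferred from the existing comparison-filter shape, not taken from this PR.

```python
# Hypothetical sketch of the 2.3.0 "comparison filter in/not in" feature.
# The "type" literals ("in" / "not in") and the key/value field names are
# assumptions based on the existing comparison-filter shape, not this PR.

def in_filter(key, values):
    """Filter matching records whose `key` attribute is one of `values`."""
    return {"type": "in", "key": key, "value": list(values)}

def not_in_filter(key, values):
    """Filter matching records whose `key` attribute is none of `values`."""
    return {"type": "not in", "key": key, "value": list(values)}
```

Such a dict would be passed wherever the SDK accepts a comparison filter, e.g. a vector store search's `filters` argument.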

## 2.2.0 (2025-10-06)

Full Changelog: [v2.1.0...v2.2.0](https://github.com/openai/openai-python/compare/v2.1.0...v2.2.0)
4 changes: 2 additions & 2 deletions pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "openai"
version = "2.2.0"
version = "2.3.0"
description = "The official Python library for the openai API"
dynamic = ["readme"]
license = "Apache-2.0"
@@ -15,7 +15,7 @@ dependencies = [
"distro>=1.7.0, <2",
"sniffio",
"tqdm > 4",
"jiter>=0.4.0, <1",
"jiter>=0.10.0, <1",
]
requires-python = ">= 3.8"
classifiers = [
Expand Down
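The loosened floor matters because the old lockfiles pinned jiter 0.5.0 and 0.6.1, both of which `>=0.10.0, <1` now excludes. A minimal stdlib-only sketch of how such a specifier is evaluated (real tooling should use the `packaging` library, which handles pre-releases and the full PEP 440 grammar):

```python
# Stdlib-only sketch of evaluating a ">=X, <Y" version specifier like the
# jiter constraint above. Handles only simple dotted integer versions and
# the >= / < operators; use the `packaging` library for anything real.

def parse(version: str) -> tuple:
    """Turn "0.10.0" into a comparable tuple (0, 10, 0)."""
    return tuple(int(part) for part in version.split("."))

def satisfies(version: str, spec: str) -> bool:
    """Check `version` against a comma-separated spec such as ">=0.10.0, <1"."""
    v = parse(version)
    for clause in spec.replace(" ", "").split(","):
        if clause.startswith(">="):
            if not v >= parse(clause[2:]):
                return False
        elif clause.startswith("<"):
            if not v < parse(clause[1:]):
                return False
    return True
```

Under this check, the newly locked 0.11.0 passes while the previously locked 0.5.0 fails.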
2 changes: 1 addition & 1 deletion requirements-dev.lock
@@ -91,7 +91,7 @@ importlib-metadata==7.0.0
iniconfig==2.0.0
# via pytest
inline-snapshot==0.28.0
-jiter==0.5.0
+jiter==0.11.0
# via openai
markdown-it-py==3.0.0
# via rich
2 changes: 1 addition & 1 deletion requirements.lock
@@ -51,7 +51,7 @@ idna==3.4
# via anyio
# via httpx
# via yarl
-jiter==0.6.1
+jiter==0.11.0
# via openai
multidict==6.5.0
# via aiohttp
2 changes: 1 addition & 1 deletion src/openai/_version.py
@@ -1,4 +1,4 @@
# File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

__title__ = "openai"
__version__ = "2.2.0" # x-release-please-version
__version__ = "2.3.0" # x-release-please-version
12 changes: 12 additions & 0 deletions src/openai/resources/beta/assistants.py
@@ -102,6 +102,9 @@ def create(
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.

response_format: Specifies the format that the model must output. Compatible with
[GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
[GPT-4 Turbo](https://platform.openai.com/docs/models#gpt-4-turbo-and-gpt-4),
@@ -309,6 +312,9 @@ def update(
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.

response_format: Specifies the format that the model must output. Compatible with
[GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
[GPT-4 Turbo](https://platform.openai.com/docs/models#gpt-4-turbo-and-gpt-4),
@@ -555,6 +561,9 @@ async def create(
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.

response_format: Specifies the format that the model must output. Compatible with
[GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
[GPT-4 Turbo](https://platform.openai.com/docs/models#gpt-4-turbo-and-gpt-4),
@@ -762,6 +771,9 @@ async def update(
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.

response_format: Specifies the format that the model must output. Compatible with
[GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
[GPT-4 Turbo](https://platform.openai.com/docs/models#gpt-4-turbo-and-gpt-4),
18 changes: 18 additions & 0 deletions src/openai/resources/beta/threads/runs/runs.py
@@ -173,6 +173,9 @@ def create(
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.

response_format: Specifies the format that the model must output. Compatible with
[GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
[GPT-4 Turbo](https://platform.openai.com/docs/models#gpt-4-turbo-and-gpt-4),
@@ -327,6 +330,9 @@ def create(
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.

response_format: Specifies the format that the model must output. Compatible with
[GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
[GPT-4 Turbo](https://platform.openai.com/docs/models#gpt-4-turbo-and-gpt-4),
@@ -477,6 +483,9 @@ def create(
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.

response_format: Specifies the format that the model must output. Compatible with
[GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
[GPT-4 Turbo](https://platform.openai.com/docs/models#gpt-4-turbo-and-gpt-4),
@@ -1603,6 +1612,9 @@ async def create(
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.

response_format: Specifies the format that the model must output. Compatible with
[GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
[GPT-4 Turbo](https://platform.openai.com/docs/models#gpt-4-turbo-and-gpt-4),
@@ -1757,6 +1769,9 @@ async def create(
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.

response_format: Specifies the format that the model must output. Compatible with
[GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
[GPT-4 Turbo](https://platform.openai.com/docs/models#gpt-4-turbo-and-gpt-4),
@@ -1907,6 +1922,9 @@ async def create(
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.

response_format: Specifies the format that the model must output. Compatible with
[GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
[GPT-4 Turbo](https://platform.openai.com/docs/models#gpt-4-turbo-and-gpt-4),
18 changes: 18 additions & 0 deletions src/openai/resources/chat/completions/completions.py
@@ -407,6 +407,9 @@ def create(
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.

response_format: An object specifying the format that the model must output.

Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
@@ -704,6 +707,9 @@ def create(
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.

response_format: An object specifying the format that the model must output.

Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
@@ -992,6 +998,9 @@ def create(
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.

response_format: An object specifying the format that the model must output.

Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
@@ -1845,6 +1854,9 @@ async def create(
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.

response_format: An object specifying the format that the model must output.

Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
@@ -2142,6 +2154,9 @@ async def create(
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.

response_format: An object specifying the format that the model must output.

Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
@@ -2430,6 +2445,9 @@ async def create(
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.

response_format: An object specifying the format that the model must output.

Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
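The note repeated across these docstrings can be expressed as a small client-side guard. This helper is hypothetical (the SDK performs no such check itself; the constraint is enforced server-side), but it mirrors the documented rule for illustration:

```python
# Hypothetical guard mirroring the documented constraint: `gpt-5-pro`
# defaults to (and only supports) "high" reasoning effort, while other
# reasoning models accept "minimal", "low", "medium", or "high".
from typing import Optional

EFFORTS = ("minimal", "low", "medium", "high")

def resolve_reasoning_effort(model: str, effort: Optional[str]) -> Optional[str]:
    if model == "gpt-5-pro":
        if effort not in (None, "high"):
            raise ValueError(f"{model} only supports 'high' reasoning effort")
        return "high"  # the documented default for this model
    if effort is not None and effort not in EFFORTS:
        raise ValueError(f"unknown reasoning effort: {effort!r}")
    return effort
```

The result would then be passed as `reasoning_effort=` to `client.chat.completions.create(...)`.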
4 changes: 2 additions & 2 deletions src/openai/resources/files.py
@@ -236,7 +236,7 @@ def delete(
timeout: float | httpx.Timeout | None | NotGiven = not_given,
) -> FileDeleted:
"""
-        Delete a file.
+        Delete a file and remove it from all vector stores.
Args:
extra_headers: Send extra headers
@@ -553,7 +553,7 @@ async def delete(
timeout: float | httpx.Timeout | None | NotGiven = not_given,
) -> FileDeleted:
"""
-        Delete a file.
+        Delete a file and remove it from all vector stores.
Args:
extra_headers: Send extra headers
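The clarified docstring ("and remove it from all vector stores") describes a server-side cascade; a toy in-memory model makes the behavior concrete. This is an illustration only, not the SDK — the real call is simply `client.files.delete(file_id)`:

```python
# Toy in-memory model of the documented delete semantics: removing a file
# also detaches it from every vector store that references it.

class FakeFileStore:
    def __init__(self):
        self.files = {"file-1", "file-2"}
        self.vector_stores = {
            "vs-a": {"file-1"},
            "vs-b": {"file-1", "file-2"},
        }

    def delete_file(self, file_id: str) -> None:
        """Delete a file and remove it from all vector stores."""
        self.files.discard(file_id)
        for members in self.vector_stores.values():
            members.discard(file_id)
```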
3 changes: 3 additions & 0 deletions src/openai/types/beta/assistant_create_params.py
@@ -65,6 +65,9 @@ class AssistantCreateParams(TypedDict, total=False):
supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.
"""

response_format: Optional[AssistantResponseFormatOptionParam]
3 changes: 3 additions & 0 deletions src/openai/types/beta/assistant_update_params.py
@@ -100,6 +100,9 @@ class AssistantUpdateParams(TypedDict, total=False):
supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.
"""

response_format: Optional[AssistantResponseFormatOptionParam]
3 changes: 3 additions & 0 deletions src/openai/types/beta/threads/run_create_params.py
@@ -114,6 +114,9 @@ class RunCreateParamsBase(TypedDict, total=False):
supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.
"""

response_format: Optional[AssistantResponseFormatOptionParam]
3 changes: 3 additions & 0 deletions src/openai/types/chat/completion_create_params.py
@@ -192,6 +192,9 @@ class CompletionCreateParamsBase(TypedDict, total=False):
supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.
"""

response_format: ResponseFormat
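For readers unfamiliar with the `TypedDict, total=False` pattern used by `CompletionCreateParamsBase`, a stripped-down sketch with the field set reduced to the two relevant keys (not the SDK's actual class):

```python
# Minimal sketch of the total=False TypedDict pattern: every key is
# optional, matching request parameters that may simply be omitted.
from typing import Literal, Optional, TypedDict

class ParamsSketch(TypedDict, total=False):
    model: str
    reasoning_effort: Optional[Literal["minimal", "low", "medium", "high"]]

# Both of these type-check: omitted keys are allowed under total=False.
full: ParamsSketch = {"model": "gpt-5", "reasoning_effort": "low"}
partial: ParamsSketch = {"model": "gpt-5"}
```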
@@ -175,6 +175,9 @@ class SamplingParams(BaseModel):
supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.
"""

response_format: Optional[SamplingParamsResponseFormat] = None
@@ -171,6 +171,9 @@ class SamplingParams(TypedDict, total=False):
supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.
"""

response_format: SamplingParamsResponseFormat
6 changes: 6 additions & 0 deletions src/openai/types/evals/run_cancel_response.py
@@ -106,6 +106,9 @@ class DataSourceResponsesSourceResponses(BaseModel):
supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.
"""

temperature: Optional[float] = None
@@ -241,6 +244,9 @@ class DataSourceResponsesSamplingParams(BaseModel):
supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.
"""

seed: Optional[int] = None
6 changes: 6 additions & 0 deletions src/openai/types/evals/run_create_params.py
@@ -119,6 +119,9 @@ class DataSourceCreateEvalResponsesRunDataSourceSourceResponses(TypedDict, total
supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.
"""

temperature: Optional[float]
@@ -259,6 +262,9 @@ class DataSourceCreateEvalResponsesRunDataSourceSamplingParams(TypedDict, total=
supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.
"""

seed: int
6 changes: 6 additions & 0 deletions src/openai/types/evals/run_create_response.py
@@ -106,6 +106,9 @@ class DataSourceResponsesSourceResponses(BaseModel):
supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.
"""

temperature: Optional[float] = None
@@ -241,6 +244,9 @@ class DataSourceResponsesSamplingParams(BaseModel):
supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.
"""

seed: Optional[int] = None
6 changes: 6 additions & 0 deletions src/openai/types/evals/run_list_response.py
@@ -106,6 +106,9 @@ class DataSourceResponsesSourceResponses(BaseModel):
supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.
"""

temperature: Optional[float] = None
@@ -241,6 +244,9 @@ class DataSourceResponsesSamplingParams(BaseModel):
supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
effort can result in faster responses and fewer tokens used on reasoning in a
response.

Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
effort.
"""

seed: Optional[int] = None