Confirm this is an issue with the Python library and not an underlying OpenAI API
- This is an issue with the Python library
Describe the bug
In recent versions of the openai Python library, it is no longer possible to pass a bytes object for the image argument of the OpenAI().images.edit method.
Judging from line 45 of venv/lib/python3.12/site-packages/openai/_types.py, bytes should also be an accepted type.
- Here is the type definition for the image argument:
# Approximates httpx internal ProxiesTypes and RequestFiles types
# while adding support for `PathLike` instances
ProxiesDict = Dict["str | URL", Union[None, str, URL, Proxy]]
ProxiesTypes = Union[str, Proxy, ProxiesDict]
if TYPE_CHECKING:
    Base64FileInput = Union[IO[bytes], PathLike[str]]
    FileContent = Union[IO[bytes], bytes, PathLike[str]]
else:
    Base64FileInput = Union[IO[bytes], PathLike]
    FileContent = Union[IO[bytes], bytes, PathLike]  # PathLike is not subscriptable in Python 3.8.
FileTypes = Union[
    # file (or bytes)
    FileContent,
    # (filename, file (or bytes))
    Tuple[Optional[str], FileContent],
    # (filename, file (or bytes), content_type)
    Tuple[Optional[str], FileContent, Optional[str]],
    # (filename, file (or bytes), content_type, headers)
    Tuple[Optional[str], FileContent, Optional[str], Mapping[str, str]],
]
RequestFiles = Union[Mapping[str, FileTypes], Sequence[Tuple[str, FileTypes]]]
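Note that FileTypes also permits a (filename, file-or-bytes, content_type) tuple. As a possible workaround for in-memory data, passing the bytes in that tuple form should let the upload carry an explicit filename and MIME type. This is an unverified sketch (prompt abbreviated, and it assumes images.edit forwards the tuple unchanged to the multipart encoder):
import base64
from openai import OpenAI

with open('unit_test/data_in/black_cat_rgba.png', 'rb') as file:
    data = file.read()

# Wrap the raw bytes in the (filename, content, content_type) form allowed by
# FileTypes, so the part is labeled image/png instead of application/octet-stream.
result = OpenAI().images.edit(
    image=('black_cat_rgba.png', data, 'image/png'),
    prompt='Generate a photorealistic image of a gift basket on a white background.',
    model='gpt-image-1'
)
image_bytes = base64.b64decode(result.data[0].b64_json)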
To Reproduce
- Code that no longer works
import openai
print('Version:', openai.__version__)

import base64
from openai import OpenAI

data: bytes = None
with open(file='unit_test/data_in/black_cat_rgba.png', mode='rb') as file:
    data = file.read()

result = OpenAI().images.edit(
    image=data,  # <--- Image data of type bytes is being passed.
    prompt='''
    Generate a photorealistic image of a gift basket on a white background
    labeled 'Relax & Unwind' with a ribbon and handwriting-like font,
    containing all the items in the reference pictures.
    ''',
    model='gpt-image-1'
)

# Save the image to a file
image_base64 = result.data[0].b64_json
image_bytes = base64.b64decode(image_base64)
with open('unit_test/data_out/gift-basket.png', 'wb') as file:
    file.write(image_bytes)
- Execution result
Version: 1.77.0
---------------------------------------------------------------------------
BadRequestError Traceback (most recent call last)
Cell In[13], line 11
8 with open(file='unit_test/data_in/black_cat_rgba.png', mode='rb') as file:
9 data = file.read()
---> 11 result = OpenAI().images.edit(
12 image=data,
13 prompt='''
14 Generate a photorealistic image of a gift basket on a white background
15 labeled 'Relax & Unwind' with a ribbon and handwriting-like font,
16 containing all the items in the reference pictures.
17 ''',
18 model='gpt-image-1'
19 )
21 # Save the image to a file
22 image_base64 = result.data[0].b64_json
File ~/workspace/orcas_proj/venv/lib/python3.12/site-packages/openai/resources/images.py:218, in Images.edit(self, image, prompt, background, mask, model, n, quality, response_format, size, user, extra_headers, extra_query, extra_body, timeout)
214 # It should be noted that the actual Content-Type header that will be
215 # sent to the server will contain a `boundary` parameter, e.g.
216 # multipart/form-data; boundary=---abc--
217 extra_headers = {"Content-Type": "multipart/form-data", **(extra_headers or {})}
--> 218 return self._post(
219 "/images/edits",
220 body=maybe_transform(body, image_edit_params.ImageEditParams),
221 files=files,
222 options=make_request_options(
223 extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
224 ),
225 cast_to=ImagesResponse,
226 )
File ~/workspace/orcas_proj/venv/lib/python3.12/site-packages/openai/_base_client.py:1239, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
1225 def post(
1226 self,
1227 path: str,
(...)
1234 stream_cls: type[_StreamT] | None = None,
1235 ) -> ResponseT | _StreamT:
1236 opts = FinalRequestOptions.construct(
1237 method="post", url=path, json_data=body, files=to_httpx_files(files), **options
1238 )
-> 1239 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File ~/workspace/orcas_proj/venv/lib/python3.12/site-packages/openai/_base_client.py:1034, in SyncAPIClient.request(self, cast_to, options, stream, stream_cls)
1031 err.response.read()
1033 log.debug("Re-raising status error")
-> 1034 raise self._make_status_error_from_response(err.response) from None
1036 break
1038 assert response is not None, "could not resolve response (should never happen)"
BadRequestError: Error code: 400 - {'error': {'message': "Invalid file 'image': unsupported mimetype ('application/octet-stream'). Supported file formats are 'image/jpeg', 'image/png', and 'image/webp'.", 'type': 'invalid_request_error', 'param': 'image', 'code': 'unsupported_file_mimetype'}}
Code snippets
# If you specify data of type io.BufferedReader for the image argument, it will work correctly.
import base64
from openai import OpenAI

result = OpenAI().images.edit(
    image=open(file='unit_test/data_in/black_cat_rgba.png', mode='rb'),
    prompt='''
    Generate a photorealistic image of a gift basket on a white background
    labeled 'Relax & Unwind' with a ribbon and handwriting-like font,
    containing all the items in the reference pictures.
    ''',
    model='gpt-image-1'
)

# Save the image to a file
image_base64 = result.data[0].b64_json
image_bytes = base64.b64decode(image_base64)
with open('unit_test/data_out/gift-basket.png', 'wb') as file:
    file.write(image_bytes)
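If the data only exists in memory, another option suggested by the FileContent definition (IO[bytes] is accepted) is to wrap the bytes in an io.BytesIO and set its name attribute so a filename, and hence a MIME type, can be inferred. This is an unverified sketch reusing the paths above, with the prompt abbreviated:
import base64
import io
from openai import OpenAI

with open('unit_test/data_in/black_cat_rgba.png', 'rb') as file:
    data = file.read()

# Present the in-memory bytes as a named file-like object so the multipart
# encoder can derive 'black_cat_rgba.png' and an image/png content type.
buffer = io.BytesIO(data)
buffer.name = 'black_cat_rgba.png'

result = OpenAI().images.edit(
    image=buffer,
    prompt='Generate a photorealistic image of a gift basket on a white background.',
    model='gpt-image-1'
)
image_bytes = base64.b64decode(result.data[0].b64_json)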
OS
Ubuntu
Python version
Python v3.12.3
Library version
openai v1.77.0