# Changelog

## 0.7.0 (2025-06-09)

Full Changelog: [v0.6.0...v0.7.0](https://github.com/openai/openai-ruby/compare/v0.6.0...v0.7.0)

### Features

* **api:** Add tools and structured outputs to evals ([6ee3392](https://github.com/openai/openai-ruby/commit/6ee33924e9146e2450e9c43d052886ed3214cbde))

### Bug Fixes

* default content-type for text in multi-part formdata uploads should be text/plain ([105cf47](https://github.com/openai/openai-ruby/commit/105cf4717993c744ee6c453d2a99ae03f51035d4))
* tool parameter mapping for chat completions ([#156](https://github.com/openai/openai-ruby/issues/156)) ([5999b9f](https://github.com/openai/openai-ruby/commit/5999b9f6ad6dc73a290a8ef7b1b52bd89897039c))
* tool parameter mapping for responses ([#704](https://github.com/openai/openai-ruby/issues/704)) ([ac8bf11](https://github.com/openai/openai-ruby/commit/ac8bf11cf59fcc778f1658429a1fc06eaca79bba))

## 0.6.0 (2025-06-03)

Full Changelog: [v0.5.1...v0.6.0](https://github.com/openai/openai-ruby/compare/v0.5.1...v0.6.0)
# Some parameter documentations has been truncated, see
# {OpenAI::Models::Evals::CreateEvalCompletionsRunDataSource::SamplingParams} for
# more details.
#
# @param max_completion_tokens [Integer] The maximum number of tokens in the generated output.
#
# @param response_format [OpenAI::Models::ResponseFormatText, OpenAI::Models::ResponseFormatJSONSchema, OpenAI::Models::ResponseFormatJSONObject] An object specifying the format that the model must output.
#
# @param seed [Integer] A seed value to initialize the randomness, during sampling.
#
# @param temperature [Float] A higher temperature increases randomness in the outputs.
#
# @param tools [Array<OpenAI::Models::Chat::ChatCompletionTool>] A list of tools the model may call. Currently, only functions are supported as a
#
# @param top_p [Float] An alternative to temperature for nucleus sampling; 1.0 includes all tokens.

# An object specifying the format that the model must output.
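The `@param` list above documents the keyword arguments of the new `SamplingParams` initializer for completions-based eval run data sources. Below is a minimal sketch of the new `tools` and `response_format` options; the keywords come from the documentation above, while the function tool definition is a hypothetical example and is not part of this diff.

```ruby
require "openai"

# Sketch only: the SamplingParams keywords are taken from the documented
# initializer; the function tool below is a hypothetical illustration.
sampling_params =
  OpenAI::Models::Evals::CreateEvalCompletionsRunDataSource::SamplingParams.new(
    max_completion_tokens: 256,
    seed: 42,
    temperature: 0.2,
    top_p: 1.0,
    # New in 0.7.0: structured outputs for the sampled completions.
    response_format: OpenAI::Models::ResponseFormatJSONObject.new,
    # New in 0.7.0: tools the model may call while sampling.
    tools: [
      {
        type: :function,
        function: {
          name: "lookup_order", # hypothetical tool
          parameters: {
            type: "object",
            properties: {order_id: {type: "string"}}
          }
        }
      }
    ]
  )
```

In practice this object (or an equivalent plain hash) would be supplied as the `sampling_params` of a completions run data source when creating an eval run; that wiring is outside the scope of this excerpt.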
# @param max_completion_tokens [Integer] The maximum number of tokens in the generated output.
#
# @param seed [Integer] A seed value to initialize the randomness, during sampling.
#
# @param temperature [Float] A higher temperature increases randomness in the outputs.
#
# @param text [OpenAI::Models::Evals::RunCancelResponse::DataSource::Responses::SamplingParams::Text] Configuration options for a text response from the model. Can be plain
#
# @param tools [Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::WebSearchTool>] An array of tools the model may call while generating a response. You
#
# @param top_p [Float] An alternative to temperature for nucleus sampling; 1.0 includes all tokens.

# @param format_ [OpenAI::Models::ResponseFormatText, OpenAI::Models::Responses::ResponseFormatTextJSONSchemaConfig, OpenAI::Models::ResponseFormatJSONObject] An object specifying the format that the model must output.
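This second excerpt documents the matching fields on the responses side of an eval run (here `RunCancelResponse`), including the new `text` and `tools` parameters and the `format_` option of the nested `Text` configuration. A small read-side sketch follows; the `cancel` call shape, the placeholder IDs, and the access path through `data_source` are assumptions for illustration, not taken from the diff.

```ruby
require "openai"

client = OpenAI::Client.new # assumes OPENAI_API_KEY is set in the environment

# Assumed call shape and placeholder IDs; the diff only shows the model docs.
run = client.evals.runs.cancel("evalrun_123", eval_id: "eval_456")

# Assuming the run uses a responses data source, inspect its sampling params.
params = run.data_source.sampling_params
puts params.max_completion_tokens
puts params.temperature
puts params.text&.format_                  # new: structured text output configuration
Array(params.tools).each { |tool| p tool } # new: tools available during the run
```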