Added litellm model config options and improved _prepare_max_new_tokens
#967
base: main
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
```python
response = litellm.completion(**kwargs)
content = response.choices[0].message.content

if content and "<think>" in content:
```
this is handled by the remove thinking tags option in the CLI
Oh I see! Then I guess this happens outside the model classes. Should I just remove that from litellm_model.py then?
yeah! You can also make sure that when you use `--remove_thinking_tags` it works as expected :)
I removed it. However, I was unable to reproduce the case where the reasoning traces appear in the output of the model, because the reasoning is actually saved under the `reasonings` attribute of the `ModelResponse`, as defined on lines 365-374 here. I did verify (using a breakpoint in my debugging config) that `remove_reasoning_tags` is executed as part of `_post_process_outputs` in the `Pipeline` (by default `--remove-reasoning-tags` is set to `True`). So I think it is safe to remove the code that you mentioned, and to assume that the stripping of reasoning content will work if at some point there is actual reasoning content in the `text` attribute of `ModelResponse`.
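For context, stripping reasoning tags is typically just a small regex pass over the generated text. Here is a minimal sketch of that idea, not the actual lighteval implementation: the function name `strip_reasoning_tags` and the tag list are assumptions made for illustration.

```python
import re

# Hypothetical helper: removes <think>...</think> blocks from generated text.
# The real remove_reasoning_tags in lighteval may differ; this only illustrates
# the kind of post-processing that runs during _post_process_outputs.
def strip_reasoning_tags(text: str, tag_pairs=(("<think>", "</think>"),)) -> str:
    for open_tag, close_tag in tag_pairs:
        pattern = re.escape(open_tag) + r".*?" + re.escape(close_tag)
        text = re.sub(pattern, "", text, flags=re.DOTALL)
    return text.strip()

print(strip_reasoning_tags("<think>chain of thought</think>The answer is 42."))
# -> "The answer is 42."
```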
looks good! only a few questions and good to merge
…for reasoning models more general
Co-authored-by: Nathan Habib <[email protected]>
Force-pushed from 1f36913 to 3b6101d
Background
See Issue #966.
Changes in this PR
This PR introduces new configuration options in `LiteLLMModelConfig`.

The increase in the allowed number of tokens (see `_prepare_max_new_tokens`) is now calculated for all models that litellm recognizes as reasoning models (as indicated by its `supports_reasoning` function). Instead of hardcoded upper bounds, we use litellm's `get_max_tokens` helper function, or, if this fails, we query the maximum context length from different endpoints on OpenRouter. If the specified provider is present in that list, we take the information directly from OpenRouter. Otherwise, we choose the minimum context length among all OpenRouter providers, so the value works at least with every provider listed there. If this also fails, we return the default context length of 4096, the same value that is currently hardcoded.

In order to use litellm's `supports_reasoning` function, I had to update the minimum required version of litellm in pyproject.toml to 1.66.0.
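To illustrate the fallback chain described above, here is a minimal sketch under stated assumptions: the function name `_max_tokens_for_reasoning_model`, the OpenRouter endpoint URL, the JSON parsing, and the model-matching logic are illustrative and not the actual code in this PR; only `litellm.supports_reasoning` and `litellm.get_max_tokens` are real litellm helpers.

```python
import litellm
import requests

DEFAULT_MAX_TOKENS = 4096  # same default as the previously hardcoded value


def _max_tokens_for_reasoning_model(model: str) -> int:
    """Sketch of the fallback chain: litellm -> OpenRouter -> hardcoded default."""
    # Only models litellm recognizes as reasoning models get the increased budget.
    if not litellm.supports_reasoning(model=model):
        return DEFAULT_MAX_TOKENS

    # 1) Ask litellm first.
    try:
        max_tokens = litellm.get_max_tokens(model)
        if max_tokens:
            return max_tokens
    except Exception:
        pass

    # 2) Fall back to OpenRouter's public model listing (hypothetical parsing).
    try:
        resp = requests.get("https://openrouter.ai/api/v1/models", timeout=10)
        resp.raise_for_status()
        matches = [m for m in resp.json()["data"] if model in m["id"]]
        if matches:
            # Take the minimum so the value works with every listed provider.
            return min(int(m["context_length"]) for m in matches)
    except Exception:
        pass

    # 3) Last resort: the old hardcoded default.
    return DEFAULT_MAX_TOKENS
```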