I'm not making a pull request because I'm not sure if this is something that should be default behavior, or if it should be implemented as an option.
I'm using the server example, but would like it to return the currently-loaded model with each transcription. This is in case the /load endpoint is used and the model has changed since the server was started.
Here are the changes I made to implement this for the default JSON type response. I know the Verbose JSON is intended to match OpenAI's, so I don't know whether the community would like it added there or not.
In server.cpp:
- At line 914, update the JSON response by adding a "model" field (a combined sketch of both changes follows this list):
json jres = json{
{"text", results},
{"model", default_params.model.c_str()}
};
- Add the following line before line 958 so that the default_params data structure is updated with the new model after it is verified to exist:
default_params.model = model.c_str();
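
For context, here is a minimal, self-contained sketch of how the two changes could fit together. The try_load_model and make_response helpers, the simplified whisper_params struct, the nlohmann::json include path, and the placeholder model paths are illustrative assumptions rather than the actual server.cpp structure; only the two marked lines correspond to the proposed edits.

```cpp
// Minimal sketch of both proposed changes. The helper functions, the
// simplified whisper_params struct, and the placeholder paths are
// illustrative only -- they are not the exact server.cpp code.
#include <fstream>
#include <iostream>
#include <string>

#include <nlohmann/json.hpp>

using json = nlohmann::ordered_json;

struct whisper_params {
    std::string model = "models/ggml-base.en.bin"; // placeholder default
    // ... other fields omitted
};

// Change 2 (around line 958): after the requested model file is verified
// to exist, keep default_params in sync with it.
static bool try_load_model(const std::string & model, whisper_params & default_params) {
    std::ifstream f(model);
    if (!f.good()) {
        return false; // model file not found; default_params untouched
    }
    default_params.model = model.c_str(); // proposed change
    // ... (re)initialize the whisper context with the new model here
    return true;
}

// Change 1 (around line 914): include the currently-loaded model in the
// JSON response alongside the transcription text.
static json make_response(const std::string & results, const whisper_params & default_params) {
    json jres = json{
        {"text",  results},
        {"model", default_params.model.c_str()} // proposed change
    };
    return jres;
}

int main() {
    whisper_params default_params;
    try_load_model("models/ggml-small.en.bin", default_params); // placeholder path

    // e.g. {"text":"hello world","model":"models/ggml-small.en.bin"} if the
    // file exists, otherwise the original default model path is reported
    std::cout << make_response("hello world", default_params).dump() << std::endl;
    return 0;
}
```

The idea is simply that once the /load path keeps default_params.model in sync, every subsequent response can report the model that actually produced the transcription.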
Thoughts/feedback?
Updated to use the default_params variable instead of params.