Add support for Qwen3-Reranker #15824
Great - will take a look tomorrow. Would be useful to add a basic usage example in the OP of this PR.

Yup, will add a usage example up above. Actually encountering some numerical differences comparing the output here to the HF implementation.
(force-pushed from 9d73260 to 22dd428)
Ok, finally fixed it! Now we have numerical parity with the HF implementation. It turned out to be a small difference in the chat template. Should be ready for review @ggerganov.
The change extends the last-token check so that Qwen3 rerankers also pool from the last token:

```cpp
// before:
bool last = cparams.pooling_type == LLAMA_POOLING_TYPE_LAST;

// after:
const bool last = (
    cparams.pooling_type == LLAMA_POOLING_TYPE_LAST ||
    (cparams.pooling_type == LLAMA_POOLING_TYPE_RANK && arch == LLM_ARCH_QWEN3) // qwen3 reranking & embedding models use last token
);
```
I am wondering if it makes sense to remove pooling type `RANK` altogether from libllama? Do you have any thoughts about whether having a separate pooling class `RANK` is really necessary?
I think you could get really close to merging `RANK` with `LAST`. The main differentiator is in `llm_graph_context::build_pooling`, where you apply `cls_out` to map from the last token of the last hidden state to the classification output (usually yes/no). Unlike the other pooling types, you actually need knowledge of the model weights to do the calculation.
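To make the distinction concrete, here is a minimal sketch of how `RANK` differs from `LAST` (illustrative only, not the actual llama.cpp code; the helper name `build_rank_head` and the exact tensor shapes are assumptions):

```cpp
#include "ggml.h"

// Sketch: RANK pooling is essentially LAST pooling followed by applying the
// model's classification head (cls_out / cls_out_b) to get the rank logits.
// This is the step that requires model weights, unlike MEAN/LAST/CLS.
static ggml_tensor * build_rank_head(
        ggml_context * ctx0,
        ggml_tensor  * last_hidden, // [n_embd, n_seqs], as produced by LAST pooling
        ggml_tensor  * cls_out,     // classification head weights
        ggml_tensor  * cls_out_b) { // classification head bias, may be nullptr
    ggml_tensor * cur = ggml_mul_mat(ctx0, cls_out, last_hidden); // -> [n_cls, n_seqs]
    if (cls_out_b != nullptr) {
        cur = ggml_add(ctx0, cur, cls_out_b);
    }
    return cur;
}
```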
Add support for Qwen3 reranking models. This is largely based on #14029 by @ngxson, with a few tweaks to reflect changes to the codebase in the interim.
This hardcodes the chat template provided in the README.md, which I'm assuming is the intended usage. If folks want to be able to change that, then we'd need a new CLI option. The template uses string substitution rather than jinja, as it seems like jinja is only used for chat messages.

Edit: Here's an example usage similar to that used in the official Qwen repo. Note that `\t` separates queries from documents and `\n` separates different prompts:

```sh
build/bin/llama-embedding -m qwen3-reranker-0.6b-f32.gguf --embd-normalize -1 -p "What is the capital of China?\tThe capital of China is Beijing.\nExplain gravity\tGravity is a force that attracts two bodies towards each other."
```
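To illustrate the string-substitution approach, here is a minimal sketch (the helper name `format_rerank_prompt` and the abbreviated template text are assumptions for illustration; the real template is the one from the model's README.md):

```cpp
#include <string>

// Hypothetical helper (not from the PR) showing string substitution into a
// fixed reranker template. The template body is abbreviated for illustration.
static std::string format_rerank_prompt(const std::string & query, const std::string & doc) {
    std::string tmpl =
        "<|im_start|>user\n"
        "<Query>: {query}\n"
        "<Document>: {document}<|im_end|>\n"; // abbreviated

    const auto substitute = [&tmpl](const std::string & marker, const std::string & value) {
        const auto pos = tmpl.find(marker);
        if (pos != std::string::npos) {
            tmpl.replace(pos, marker.size(), value);
        }
    };

    substitute("{query}",    query);
    substitute("{document}", doc);
    return tmpl;
}
```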
Notice that we need to pass `--embd-normalize -1` to disable normalization (the default is L2 norm).
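For reference, a rough sketch of what that flag controls, assuming the convention described above that `-1` means no normalization and the default applies an L2 norm (the helper below is hypothetical, written for illustration):

```cpp
#include <cmath>

// Hypothetical helper illustrating the assumed --embd-normalize semantics:
// mode -1 passes raw values through (needed for rank scores), mode 2 (the
// default) applies L2 normalization.
static void normalize_embd(const float * inp, float * out, int n, int mode) {
    if (mode == -1) {
        for (int i = 0; i < n; i++) out[i] = inp[i]; // raw values
        return;
    }
    // mode == 2: L2 (Euclidean) normalization
    double sum = 0.0;
    for (int i = 0; i < n; i++) sum += (double) inp[i] * inp[i];
    const double norm = sum > 0.0 ? std::sqrt(sum) : 1.0;
    for (int i = 0; i < n; i++) out[i] = (float) (inp[i] / norm);
}
```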