[Bug] FusedMoE does not recognize ModelOpt fp8 format. #6714

@michaelfeil

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
  • 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose. Otherwise, it will be closed.
  • 5. Please use English, otherwise it will be closed.

Describe the bug

I quantized an FP8 version of Llama 4 Scout with ModelOpt: https://huggingface.co/baseten/Llama-4-Scout-17B-16E-fp8. When loading it, the FusedMoE layers do not recognize the ModelOpt FP8 format. Non-MoE ModelOpt FP8 checkpoints currently work, e.g. https://huggingface.co/nvidia/Llama-3.1-8B-Instruct-FP8.
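For context, here is a minimal sketch of how the checkpoint is loaded. This snippet is illustrative rather than the exact command from the report; it assumes the standard sglang offline `Engine` API, and the `tp_size` and sampling parameters are placeholders.

```python
# Illustrative sketch only: load the ModelOpt FP8 MoE checkpoint through
# sglang's offline Engine API and run a tiny generation.
import sglang as sgl

if __name__ == "__main__":
    llm = sgl.Engine(
        model_path="baseten/Llama-4-Scout-17B-16E-fp8",  # checkpoint from this report
        tp_size=8,  # placeholder; choose a tensor-parallel size that fits the model
    )
    out = llm.generate("Hello, my name is", {"max_new_tokens": 16})
    print(out)
    llm.shutdown()
```

The same call path works for the non-MoE Llama-3.1-8B FP8 checkpoint linked above, which is why the failure appears specific to FusedMoE handling of the ModelOpt format.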

Reproduction

Environment

Labels

bug (Something isn't working), inactive
