Conversation

cyanguwa (Collaborator)

Description

This PR adds max_score support to the FusedAttention and UnfusedDotProductAttention backends in TE-PyTorch. The max_score can be consumed by a MuonClip optimizer. This PR only supports FP16/BF16 and non-CP (no context parallelism) cases.
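
For context, a minimal usage sketch of how the new output might be consumed. The return_max_score flag is taken from the reviewed diff below; where exactly it is passed (constructor vs. forward), the shape of max_score, and the MuonClip-style qk-clip step are assumptions for illustration, not the PR's confirmed API:

    import torch
    from transformer_engine.pytorch import DotProductAttention

    attn = DotProductAttention(num_attention_heads=16, kv_channels=64)

    # sbhd layout: (seq, batch, heads, head_dim); BF16, per this PR's scope
    q = torch.randn(512, 2, 16, 64, dtype=torch.bfloat16, device="cuda")
    k = torch.randn(512, 2, 16, 64, dtype=torch.bfloat16, device="cuda")
    v = torch.randn(512, 2, 16, 64, dtype=torch.bfloat16, device="cuda")

    # With the feature enabled, the forward is expected to return
    # (output, max_score) instead of just the output tensor.
    out, max_score = attn(q, k, v, return_max_score=True)  # kwarg placement assumed

    # A MuonClip-style optimizer (qk-clip) would compare the largest
    # attention logit against a threshold tau and, when exceeded,
    # downscale the Q/K projection weights to keep logits bounded.
    tau = 100.0
    logit_max = float(max_score.max())
    if logit_max > tau:
        gamma = tau / logit_max
        # q_proj.weight.data.mul_(gamma ** 0.5)  # hypothetical weight handles
        # k_proj.weight.data.mul_(gamma ** 0.5)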

Type of change

  • Documentation change (change only to the documentation, either a fix or new content)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Infra/Build change
  • Code refactoring

Changes

Please list the changes introduced in this PR:

  • Add max_score support in FusedAttention and UnfusedDotProductAttention

Checklist:

  • I have read and followed the contributing guidelines
  • The functionality is complete
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

@cyanguwa added the 2.9.0 label on Sep 22, 2025
    if config.return_max_score:
        out = (out, max_score)
    else:
        out = (out, None)
Review comment: This changes the returned values even when max_score is not requested. Should we keep backward compatibility when the new feature is not used?
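
One backward-compatible alternative would be to change the return type only when the feature is requested; a minimal sketch of that suggestion, not the PR's code:

    if config.return_max_score:
        out = (out, max_score)
    # else: leave `out` as the plain output tensor, so callers that do not
    # request max_score see the same return type as before this PR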
