Provide BinaryAUROC support for Masked/Sparse Labels #3096

@robertreaney

Description

🚀 Feature

Currently the BinaryAUROC metric's update step expects dense preds and target tensors of matching shape at each iteration. In a multi-output training regimen involving masked outputs, the current AUROC metric classes offer no support.

This feature proposes an additional metric, MaskedBinaryAUROC, to support calculating per-label and aggregate AUROC when each batch step uses sparse/nested/masked labels.

Motivation

I had to create a custom class for this functionality for a professional project, and I'd like to contribute it. Also, my company would like to become involved in the open source community by promoting code when possible.

Pitch

Create a MaskedBinaryAUROC metric with functionality similar to BinaryAUROC, but one that also takes a mask at each update step and considers only unmasked values in the final calculation.

The implementation is straightforward:

  1. Register preds/target/mask states with list defaults.
  2. .update(preds, target, mask) appends each batch to the lists.
  3. .compute() iterates over each column, applies the mask, and computes a per-label value by leveraging torchmetrics.functional.classification.binary_auroc.
  4. Return the mean of the column-wise AUROC values.

Alternatives

  • SparseBinaryAUROC: accept a sparse tensor representation of the output labels instead of a dense target plus a mask.
  • Extend the functionality of BinaryAUROC instead of adding a separate class.
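For the sparse alternative, the dense target plus mask carry the same information as a single sparse COO tensor whose indices mark the observed entries. A small pure-torch illustration (variable names are illustrative):

```python
import torch

# Dense labels with a mask over observed entries
target = torch.tensor([[1., 0.], [0., 1.]])
mask = torch.tensor([[True, False], [True, True]])

# Equivalent sparse COO representation: indices = observed positions,
# values = the observed labels (including observed zeros)
idx = mask.nonzero().T                     # shape (2, nnz)
sparse_target = torch.sparse_coo_tensor(idx, target[mask], target.shape)
```

Note that an observed label of 0 and a masked-out entry both read as 0 in the dense view; in the sparse form the index set itself plays the role of the mask.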

Additional context

Similar masked extensions would apply to the related AUROC, MulticlassAUROC, and MultilabelAUROC classes.

Metadata

Labels: enhancement (New feature or request)
