
Add probability to Accuracy #1354


Closed · wants to merge 7 commits

Conversation

@vcarpani (Contributor) commented Oct 4, 2020

Fixes #1089

Description:

  • This PR improves the API for Accuracy according to the suggestions from the issue.

Still WIP; remaining work (both items are sketched just below):

  • Support a custom binarization threshold.
  • Support logits input.
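
A minimal sketch of what those two items could look like for the binary case; the binarize helper, its name, and its signature are hypothetical illustrations, not code from this PR:

import torch

# Hypothetical sketch of the two pending items above (not this PR's code):
# optionally map logits to probabilities, then binarize with a
# user-chosen threshold.
def binarize(y_pred: torch.Tensor, threshold: float = 0.5, from_logits: bool = False) -> torch.Tensor:
    if from_logits:
        y_pred = torch.sigmoid(y_pred)  # map logits to probabilities in [0, 1]
    return (y_pred > threshold).long()  # threshold probabilities to hard 0/1 labels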

Check list:

  • New tests are added (if a new feature is added)
  • New doc strings: description and/or example code are in RST format
  • Documentation is updated (if required)

@vcarpani changed the title from "WIP: Add probability to precision." to "Add probability to precision." Oct 9, 2020
@vfdev-5 changed the title from "Add probability to precision." to "Add probability to Accuracy" Oct 18, 2020
@vfdev-5 (Collaborator) left a comment


@vcarpani thanks for the PR, and sorry for the delay; I didn't see that it was no longer a draft.
I left a few comments to improve it, and we need to discuss some possibly misleading terms...

def __init__(
    self,
    output_transform: Callable = lambda x: x,
    is_multilabel: bool = False,
    device: Union[str, torch.device] = torch.device("cpu"),
    mode: Mode = Mode.LABELS,
@vfdev-5 (Collaborator):

I'm not sure the name LABELS is correct for the multi-class case, where we require probabilities or logits to be passed and then take the argmax.

@vcarpani (Contributor, Author):

What about UNCHANGED or RAW_INPUT?

@vfdev-5 (Collaborator):

Yes, something like RAW_INPUT could work. I'm wondering whether we could generalize these new options to all possible inputs and metric types: binary, multiclass, multilabel...
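
For illustration, one way the options under discussion could be spelled out; the enum values and the to_multiclass_labels helper below are a hypothetical sketch, not the PR's implementation:

from enum import Enum

import torch

# Illustrative only: RAW_INPUT is the name floated in this thread;
# nothing here is this PR's final code.
class Mode(Enum):
    RAW_INPUT = "raw_input"          # y_pred already holds hard labels
    PROBABILITIES = "probabilities"  # y_pred holds per-class probabilities
    LOGITS = "logits"                # y_pred holds pre-activation scores

def to_multiclass_labels(y_pred: torch.Tensor, mode: Mode) -> torch.Tensor:
    # For (batch, num_classes) input, probabilities and logits reduce to
    # labels the same way: argmax over the class dimension.
    if mode is Mode.RAW_INPUT:
        return y_pred
    return y_pred.argmax(dim=1)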

@vcarpani (Contributor, Author)

@vfdev-5 the last commit should address your comments.

@vfdev-5 (Collaborator) commented Oct 25, 2020

@vcarpani thanks for the update! I left a comment in a discussion thread.
I'm wondering whether it would be possible to replace the argmax op for the multiclass type using these new options. A similar proposal was made previously for ConfusionMatrix (#822), but something blocked it.
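
As an aside, a minimal demonstration of why such a replacement is plausible for the multiclass case (arbitrary example values, not project code): softmax preserves the row-wise argmax, so logits and probabilities yield the same predicted labels.

import torch

# Softmax is monotonic within each row, so logits and probabilities
# produce identical argmax labels.
logits = torch.tensor([[0.2, 2.5, -0.3],
                       [1.0, -1.0, 0.5]])
assert torch.equal(logits.argmax(dim=1),
                   logits.softmax(dim=1).argmax(dim=1))  # both tensor([1, 0])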

@vfdev-5 added the hacktoberfest-accepted (For accepted PRs) label Oct 25, 2020
@vcarpani (Contributor, Author)

I'll try to implement that behaviour sometime next week ;)

@vcarpani (Contributor, Author)

> I'll try to implement that behaviour sometime next week ;)

@vfdev-5, just wanted to update you on this one: I've been quite busy recently, but I have started working on it.

@vfdev-5 (Collaborator) commented Nov 19, 2020

@vcarpani thanks for the heads-up :)

@Yura52 commented Apr 20, 2021

I would like to add that "logits" in the context of multiclass problems usually means the input to the softmax (not the sigmoid) function. I am not sure there are any use cases for the current Mode.LOGITS implementation in the multiclass setting. As for Mode.PROBABILITIES, it looks fine.
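
For illustration, the distinction in code (arbitrary example values): softmax over multiclass logits yields a proper distribution, while sigmoid gives independent per-class scores.

import torch

# Softmax output sums to 1 (a distribution over classes); sigmoid
# scores are independent and need not sum to 1.
logits = torch.tensor([2.0, 0.5, -1.0])
print(torch.softmax(logits, dim=0).sum())  # tensor(1.)
print(torch.sigmoid(logits).sum())         # tensor(1.7722) -- not a distribution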

@vcarpani vcarpani closed this Mar 24, 2023
@vcarpani vcarpani deleted the accurcay branch March 24, 2023 08:16