Toxicity Scanner to return the type of content #111

Open
RQledotai opened this issue Mar 18, 2024 · 1 comment

@RQledotai

When using the input or output toxicity scanner, it would be preferable to return the type of label that was detected (e.g. sexual_explicit) instead of the offensive content itself. That would enable applications to communicate the issue to the user.

@asofter (Collaborator) commented Mar 22, 2024

Hey @RQledotai, thanks for reaching out. Apologies for the delay.

I agree, and such a refactoring is in the works to return an object with more context about the reason behind blocking. Currently, the only way to monitor this is through the logs.
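
Until that richer result object lands, one workaround is to run the multi-label toxicity classifier next to the scanner and surface the top-scoring label (e.g. sexual_explicit) yourself. The sketch below is an illustration under assumptions, not the planned interface: it assumes the Toxicity input scanner's (sanitized_prompt, is_valid, risk_score) return tuple and uses unitary/unbiased-toxic-roberta as the classifier, so adjust both to match your deployment.

```python
# Minimal sketch (not the library's API): recover the toxicity label that
# likely triggered a block, so the application can explain the rejection
# without echoing the offensive content.
from llm_guard.input_scanners import Toxicity
from transformers import pipeline

scanner = Toxicity(threshold=0.5)
classifier = pipeline(
    "text-classification",
    model="unitary/unbiased-toxic-roberta",  # assumed model; swap in your own
    top_k=None,  # return a score for every label (insult, sexual_explicit, ...)
)


def scan_with_label(prompt: str):
    sanitized_prompt, is_valid, risk_score = scanner.scan(prompt)
    if is_valid:
        return sanitized_prompt, is_valid, risk_score, None

    # Classify the prompt directly and pick the highest-scoring label.
    scores = classifier(prompt)
    if scores and isinstance(scores[0], list):  # some versions nest the result
        scores = scores[0]
    top_label = max(scores, key=lambda s: s["score"])["label"]
    return sanitized_prompt, is_valid, risk_score, top_label


_, valid, risk, label = scan_with_label("some user prompt")
if not valid:
    print(f"Blocked: detected {label} content (risk score {risk:.2f})")
```

This keeps the scanner itself untouched; once the refactor ships, the extra classifier call can be dropped in favor of the returned object.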
