It looks like the F1 score and other metrics reported in the paper use the PA adjustment method.
I want to flag that this method has been shown to overestimate performance.
Kim, S., Choi, K., Choi, H. S., Lee, B., & Yoon, S. (2022). Towards a rigorous evaluation of time-series anomaly detection. In Proceedings of the AAAI Conference on Artificial Intelligence, 36(7), 7194–7201. Available at https://arxiv.org/pdf/2109.05257.pdf.
The issue is widespread in other papers and is being discussed in other projects, such as here: thuml/Anomaly-Transformer#65
It would be helpful to see an updated baseline that uses more robust methods for evaluating results.
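To make the concern concrete, here is a minimal sketch (my own synthetic example, not the paper's code) of the point-adjustment (PA) protocol: if any single point inside a ground-truth anomaly segment is flagged, the whole segment is counted as detected. Even a predictor that flags points uniformly at random then scores a high PA-adjusted F1:

```python
import numpy as np

def point_adjust(labels, preds):
    """Apply point adjustment (PA): if any point inside a ground-truth
    anomaly segment is predicted anomalous, mark the entire segment
    as detected."""
    adjusted = preds.copy()
    i, n = 0, len(labels)
    while i < n:
        if labels[i] == 1:
            j = i
            while j < n and labels[j] == 1:  # find end of this segment
                j += 1
            if adjusted[i:j].any():          # one hit credits the whole segment
                adjusted[i:j] = 1
            i = j
        else:
            i += 1
    return adjusted

def f1(labels, preds):
    tp = int(((labels == 1) & (preds == 1)).sum())
    fp = int(((labels == 0) & (preds == 1)).sum())
    fn = int(((labels == 1) & (preds == 0)).sum())
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

rng = np.random.default_rng(0)
n = 10_000
labels = np.zeros(n, dtype=int)
for start in range(0, n, 1000):          # ten anomaly segments, 100 points each
    labels[start:start + 100] = 1
preds = (rng.random(n) < 0.05).astype(int)  # flag 5% of points at random

print(f"raw F1:         {f1(labels, preds):.3f}")
print(f"PA-adjusted F1: {f1(labels, point_adjust(labels, preds)):.3f}")
```

The raw F1 stays near the random baseline, while the PA-adjusted F1 is far higher, because a single lucky hit per long segment converts the whole segment into true positives. This is the overestimation the Kim et al. paper analyzes.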
That is indeed a concern. Within the field of anomaly detection alone, papers are published nearly every year highlighting significant issues with the evaluation metrics in common use, which can lead to disappointment among researchers and practitioners.