CADET: Calibrated Anomaly Detection for Mitigating Hardness Bias
Ailin Deng, Adam Goodge, Lang Yi Ang, Bryan Hooi
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 2002-2008.
https://doi.org/10.24963/ijcai.2022/278
The detection of anomalous samples in large, high-dimensional datasets is a challenging task with numerous practical applications. Recently, state-of-the-art performance has been achieved with deep learning methods: for example, using the reconstruction error from an autoencoder as the anomaly score. However, these scores are uncalibrated: they follow an unknown distribution and lack a clear interpretation. Furthermore, the reconstruction error is highly influenced by the 'hardness' of a given sample, which leads to both false negative and false positive errors. In this paper, we empirically demonstrate the significance of this hardness bias in a range of recent deep anomaly detection methods. To mitigate it, we propose an efficient, plug-and-play error calibration method that corrects the hardness bias in the anomaly scores without the need to retrain the model. We verify the effectiveness of our method on a range of image, time-series, and tabular datasets and against several baseline methods.
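To illustrate the reconstruction-error scoring the abstract refers to, here is a minimal sketch using a linear reconstruction model (PCA) as a simple stand-in for an autoencoder. This is an illustration of the general scoring scheme only, not the CADET calibration method itself; all function and variable names are hypothetical.

```python
import numpy as np

def reconstruction_scores(train, test, n_components=2):
    """Anomaly scores as reconstruction error under a linear model
    (PCA), a simple stand-in for a trained autoencoder."""
    mean = train.mean(axis=0)
    # Principal directions from an SVD of the centered training data.
    _, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
    V = Vt[:n_components].T              # shared encoder/decoder weights
    recon = (test - mean) @ V @ V.T + mean   # project down, then back up
    # Score = squared reconstruction error; higher = more anomalous.
    return np.sum((test - recon) ** 2, axis=1)

rng = np.random.default_rng(0)
# Correlated "normal" training data in 5 dimensions.
normal = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))
# Test set: five normal samples plus one large-magnitude outlier (last row).
test = np.vstack([normal[:5], rng.normal(size=(1, 5)) * 10])
scores = reconstruction_scores(normal, test)
```

Note that these raw scores have no fixed scale or distribution, and a "hard" but normal sample (one the model reconstructs poorly) can outscore a genuine anomaly; this is the uncalibrated, hardness-biased behavior the paper targets.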
Keywords:
Data Mining: Anomaly/Outlier Detection
AI Ethics, Trust, Fairness: Trustworthy AI