Towards Sharper Generalization Bounds for Adversarial Contrastive Learning
Wen Wen, Han Li, Tieliang Gong, Hong Chen

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 5190-5198. https://doi.org/10.24963/ijcai.2024/574

Recently, enhancing the adversarial robustness of machine learning algorithms has gained significant attention across various application domains. Given the widespread label scarcity in real-world data, adversarial contrastive learning (ACL) has been proposed to adversarially train robust models on unlabeled data. Despite its empirical success, its generalization behavior remains poorly understood and far from well-characterized. This paper aims to address this issue from a learning-theory perspective. We establish novel high-probability generalization bounds for general Lipschitz loss functions. The derived bounds scale as O(log(k)) in the number of negative samples k, improving upon existing bounds with linear dependence on k. Our results apply broadly to many prediction models, including linear models and deep neural networks. In particular, under a smoothness assumption on the loss function, we obtain an optimistic generalization bound of O(1/n) with respect to the sample size n. To the best of our knowledge, this is the first fast-rate bound for ACL. Empirical evaluations on real-world datasets verify our theoretical findings.
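
The rates stated in the abstract can be summarized schematically. The display below is an illustrative sketch, not a result reproduced from the paper: the complexity term Comp(F), the confidence parameter delta, and all constants are assumed placeholders; only the O(log k) dependence on the number of negative samples k and the O(1/n) optimistic rate in the sample size n are claimed in the abstract.

% Schematic (illustrative) forms of the claimed rates; Comp(F), delta,
% and the constants are placeholders, not the paper's notation.
%
% Standard high-probability bound: logarithmic dependence on the number
% of negative samples k, slow rate 1/sqrt(n) in the sample size n.
\[
  \underbrace{R(f) - \widehat{R}_n(f)}_{\text{generalization gap}}
  \;=\; O\!\left( \frac{\log k \cdot \mathrm{Comp}(\mathcal{F})
        + \sqrt{\log(1/\delta)}}{\sqrt{n}} \right).
\]
%
% Optimistic (fast-rate) bound under a smoothness assumption on the loss:
% when the empirical risk \widehat{R}_n(f) is small, the gap decays as O(1/n).
\[
  R(f) - \widehat{R}_n(f)
  \;=\; O\!\left( \sqrt{\frac{\widehat{R}_n(f)\,\log k \cdot \mathrm{Comp}(\mathcal{F})}{n}}
        \;+\; \frac{\log k \cdot \mathrm{Comp}(\mathcal{F}) + \log(1/\delta)}{n} \right).
\]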
Keywords:
Machine Learning: ML: Adversarial machine learning
Machine Learning: ML: Learning theory
Machine Learning: ML: Self-supervised Learning