DenseDINO: Boosting Dense Self-Supervised Learning with Token-Based Point-Level Consistency
Yike Yuan, Xinghe Fu, Yunlong Yu, Xi Li
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 1695-1703.
https://doi.org/10.24963/ijcai.2023/188
In this paper, we propose a simple yet effective transformer framework for self-supervised learning, called DenseDINO, that learns dense visual representations. To exploit the spatial information that dense prediction tasks require but existing self-supervised transformers neglect, we introduce point-level supervision across views in a novel token-based way. Specifically, DenseDINO adds extra input tokens, called reference tokens, that match point-level features using a position prior. With the reference tokens, the model can maintain spatial consistency and handle complex multi-object scenes, and thus generalizes better to dense prediction tasks. Compared with vanilla DINO, our approach obtains competitive performance on ImageNet classification and achieves a large improvement (+7.2% mIoU) in semantic segmentation on Pascal VOC under the linear probing protocol for segmentation.
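To make the reference-token idea concrete, below is a minimal PyTorch sketch, not the authors' implementation. It assumes one plausible reading of the abstract: reference tokens share a learnable content embedding, receive a position prior by interpolating the patch positional embeddings at sampled point coordinates, and are appended to the usual [CLS] and patch tokens before the transformer blocks. The class and method names (ReferenceTokenViT, point_pos_embed) are hypothetical, and the backbone is simplified.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReferenceTokenViT(nn.Module):
    """Hypothetical sketch of a ViT-style encoder that, besides the usual
    [CLS] and patch tokens, accepts 'reference tokens' built from sampled
    point positions, so point-level features can be compared across views."""

    def __init__(self, embed_dim=384, depth=4, num_heads=6,
                 img_size=224, patch_size=16):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, embed_dim,
                                     kernel_size=patch_size, stride=patch_size)
        num_patches = (img_size // patch_size) ** 2
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))
        # Shared learnable content for all reference tokens; each token is
        # distinguished only by its interpolated positional embedding (the
        # "position prior" from the abstract).
        self.ref_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        layer = nn.TransformerEncoderLayer(embed_dim, num_heads,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)

    def point_pos_embed(self, points):
        """Bilinearly interpolate the patch positional embeddings at
        normalized point coordinates in [-1, 1]; points: (B, P, 2)."""
        B = points.shape[0]
        D = self.pos_embed.shape[-1]
        side = int((self.pos_embed.shape[1] - 1) ** 0.5)
        grid = self.pos_embed[:, 1:].reshape(1, side, side, D)
        grid = grid.permute(0, 3, 1, 2).expand(B, -1, -1, -1)  # (B, D, s, s)
        sampled = F.grid_sample(grid, points.unsqueeze(1),
                                align_corners=False)           # (B, D, 1, P)
        return sampled.squeeze(2).permute(0, 2, 1)             # (B, P, D)

    def forward(self, images, points):
        B = images.shape[0]
        x = self.patch_embed(images).flatten(2).transpose(1, 2)  # (B, N, D)
        cls = self.cls_token.expand(B, -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        # Reference tokens: shared content + point-specific position prior.
        refs = (self.ref_token.expand(B, points.shape[1], -1)
                + self.point_pos_embed(points))
        x = self.blocks(torch.cat([x, refs], dim=1))
        num_refs = points.shape[1]
        # Return the class feature and the per-point reference features;
        # the latter would feed a cross-view point-level consistency loss.
        return x[:, 0], x[:, -num_refs:]
```

In a DINO-style setup, the same sampled points would be mapped into the coordinate frames of two augmented views, and the student's reference features for one view would be trained to match the teacher's reference features for the other, alongside the standard image-level objective. The exact sampling, matching, and loss details in DenseDINO may differ from this sketch.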
Keywords:
Computer Vision: CV: Transfer, low-shot, semi- and un- supervised learning
Computer Vision: CV: Representation learning