Segmenting Transparent Objects in the Wild with Transformer
Enze Xie, Wenjia Wang, Wenhai Wang, Peize Sun, Hang Xu, Ding Liang, Ping Luo
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 1194-1200.
https://doi.org/10.24963/ijcai.2021/165
This work presents a new fine-grained transparent object segmentation dataset, termed Trans10K-v2, extending Trans10K-v1, the first large-scale transparent object segmentation dataset.
Unlike Trans10K-v1, which has only two coarse categories, our new dataset offers several appealing benefits. (1) It has 11 fine-grained categories of transparent objects commonly occurring in domestic environments, making it more practical for real-world applications.
(2) Trans10K-v2 poses greater challenges to current advanced segmentation methods than its predecessor.
Furthermore, a novel Transformer-based segmentation pipeline termed Trans2Seg is proposed.
Firstly, the Transformer encoder of Trans2Seg provides a global receptive field, in contrast to the local receptive field of CNNs, which gives it a clear advantage over pure CNN architectures.
Secondly, by formulating semantic segmentation as a problem of dictionary look-up, we design a set of learnable prototypes as the query of Trans2Seg's Transformer decoder, where each prototype learns the statistics of one category in the whole dataset.
We benchmark more than 20 recent semantic segmentation methods, demonstrating that Trans2Seg significantly outperforms all the CNN-based methods and showing the proposed algorithm's potential to solve transparent object segmentation. Code is available at https://github.com/xieenze/Trans2Seg.
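The dictionary look-up formulation can be illustrated with a minimal sketch: a set of learnable prototype vectors (one per category) serves as the queries of a cross-attention step over the encoder's pixel features, and each prototype's attention map acts as a soft mask for its category. This is a simplified NumPy illustration under assumed shapes (11 classes, 32-dim features, 64 pixel tokens), not the authors' actual Trans2Seg implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Assumed illustrative shapes: 11 categories, d-dim features, hw pixel tokens
num_classes, d, hw = 11, 32, 64
rng = np.random.default_rng(0)

# In training these prototypes would be learnable parameters, one per category;
# here they are random stand-ins.
prototypes = rng.standard_normal((num_classes, d))  # decoder queries
features = rng.standard_normal((hw, d))             # encoder pixel embeddings

# Cross-attention: each prototype "looks up" the pixels of its category
attn = softmax(prototypes @ features.T / np.sqrt(d), axis=-1)  # (11, hw)

# The per-category attention maps serve as soft segmentation masks;
# taking the argmax over categories yields a per-pixel class prediction.
pred = attn.argmax(axis=0)  # (hw,)
```

In the full model, the attention maps are refined by further decoder layers and upsampled to image resolution; the key idea is that each prototype aggregates the statistics of one category across the whole dataset.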
Keywords:
Computer Vision: Perception
Computer Vision: Recognition: Detection, Categorization, Indexing, Matching, Retrieval, Semantic Interpretation