CIC: A Framework for Culturally-Aware Image Captioning

Youngsik Yun, Jihie Kim

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 1625-1633. https://doi.org/10.24963/ijcai.2024/180

Image captioning generates descriptive sentences from images and has improved greatly with Vision-Language Pre-trained models (VLPs) such as BLIP. However, current methods do not produce detailed captions for the cultural elements depicted in images, such as the traditional clothing worn by people from Asian cultural groups. In this paper, we propose a new framework, Culturally-aware Image Captioning (CIC), that generates captions describing the cultural elements extracted from images representing cultures. Inspired by methods that combine the visual modality with Large Language Models (LLMs) through appropriate prompts, our framework (1) generates questions from cultural categories for an image, (2) extracts cultural visual elements by applying Visual Question Answering (VQA) to the generated questions, and (3) generates culturally-aware captions with LLMs using prompts built from these elements. A human evaluation with 45 participants from 4 cultural groups, each with a strong understanding of the corresponding culture, shows that our framework generates more culturally descriptive captions than a VLP-based image captioning baseline. Resources can be found at https://shane3606.github.io/cic.
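To make the three-stage pipeline concrete, the sketch below outlines one way it could be wired together in Python, assuming an off-the-shelf BLIP VQA model from Hugging Face Transformers as the element extractor. The cultural categories, question templates, prompt wording, and helper names are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch of a three-stage culturally-aware captioning pipeline.
# Categories, question templates, and prompt wording are placeholders.
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

# Stage 2 backbone: an off-the-shelf BLIP VQA model (an assumed choice of VLP).
processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
vqa_model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

# Stage 1: questions derived from cultural categories (hypothetical templates).
CULTURAL_QUESTIONS = {
    "clothing": "What traditional clothing is the person wearing?",
    "food": "What food or dish is shown in the image?",
    "architecture": "What style of building is shown in the image?",
}

def extract_cultural_elements(image: Image.Image) -> dict:
    """Stage 2: answer each cultural question with VQA to obtain visual cultural elements."""
    elements = {}
    for category, question in CULTURAL_QUESTIONS.items():
        inputs = processor(image, question, return_tensors="pt")
        output_ids = vqa_model.generate(**inputs)
        elements[category] = processor.decode(output_ids[0], skip_special_tokens=True)
    return elements

def build_caption_prompt(base_caption: str, elements: dict) -> str:
    """Stage 3: assemble an LLM prompt that injects the extracted cultural elements."""
    facts = "; ".join(f"{k}: {v}" for k, v in elements.items())
    return (
        f"Rewrite the caption '{base_caption}' so that it explicitly "
        f"describes these cultural elements: {facts}."
    )

# Usage: pass build_caption_prompt(...) to any LLM to obtain a culturally-aware caption.
```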
Keywords:
Computer Vision: CV: Bias, fairness and privacy
Computer Vision: CV: Scene analysis and understanding
Computer Vision: CV: Vision, language and reasoning