Why Only Text: Empowering Vision-and-Language Navigation with Multi-modal Prompts
Haodong Hong, Sen Wang, Zi Huang, Qi Wu, Jiajun Liu
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 839-847.
https://doi.org/10.24963/ijcai.2024/93
Current Vision-and-Language Navigation (VLN) tasks mainly employ textual instructions to guide agents. However, because text is inherently abstract, the same textual instruction can correspond to different visual signals, causing severe ambiguity and limiting the transfer of prior visual knowledge from the user to the agent. To fill this gap, we propose Vision-and-Language Navigation with Multi-modal Prompts (VLN-MP), a novel task that augments traditional VLN by integrating both natural language and images into instructions. VLN-MP not only maintains backward compatibility by effectively handling text-only prompts but also consistently shows advantages across different quantities and levels of relevance of visual prompts. Possible forms of visual prompts include both exact and similar object images, providing adaptability and versatility in diverse navigation scenarios. To evaluate VLN-MP under a unified framework, we implement a new benchmark that offers: (1) a training-free pipeline to transform textual instructions into multi-modal form with landmark images; (2) diverse datasets with multi-modal instructions for different downstream tasks; (3) a novel module designed to process various image prompts for seamless integration with state-of-the-art VLN models. Extensive experiments on four VLN benchmarks (R2R, RxR, REVERIE, CVDN) show that incorporating visual prompts significantly boosts navigation performance. While maintaining efficiency with text-only prompts, VLN-MP enables agents to navigate in the pre-explore setting and outperform text-based models, demonstrating its broader applicability. Code is available at https://github.com/honghd16/VLN-MP.
Keywords:
Computer Vision: CV: Vision, language and reasoning
Computer Vision: CV: Multimodal learning
Machine Learning: ML: Multi-modal learning