Purpose Enhanced Reasoning through Iterative Prompting: Uncover Latent Robustness of ChatGPT on Code Comprehension
Yi Wang, Qidong Zhao, Dongkuan Xu, Xu Liu
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 6513-6521.
https://doi.org/10.24963/ijcai.2024/720
Code comments are crucial for gaining in-depth insights that facilitate code comprehension. The key to obtaining these insights lies in precisely summarizing the main purpose of the code. Recent approaches to code comment generation rely on prompting large language models (LLMs) such as ChatGPT, rather than training or fine-tuning task-specific models. Although ChatGPT demonstrates impressive performance in code comprehension, it still suffers from robustness challenges in consistently producing high-quality code comments. This is because ChatGPT prioritizes the semantics of code tokens, which makes it vulnerable to commonly encountered benign perturbations such as variable name replacements. This study proposes Perthept, a modular prompting paradigm that effectively mitigates the negative effects of such minor perturbations. Perthept iteratively deepens the reasoning to reach the main purpose of the code, and it remains robust even when ChatGPT's responses are stochastic or unreliable. We present a comprehensive evaluation across four public datasets, showing that our proposed methodology delivers consistent robustness improvements over other models.
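The abstract describes Perthept only at a high level. Below is a minimal sketch of what iterative, purpose-focused prompting of ChatGPT could look like, assuming the openai Python client; the prompt wording and the summarize_purpose helper are illustrative assumptions, not the actual Perthept prompt templates from the paper.

# Illustrative sketch of iterative, purpose-focused prompting.
# NOTE: the prompts below are hypothetical and are NOT the Perthept
# prompt templates defined in the paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarize_purpose(code: str, rounds: int = 3, model: str = "gpt-3.5-turbo") -> str:
    """Iteratively refine a summary toward the code's main purpose."""
    summary = ""
    for i in range(rounds):
        if i == 0:
            prompt = f"Summarize what the following code does:\n{code}"
        else:
            prompt = (
                "Here is a summary of some code:\n"
                f"{summary}\n"
                "Reason one step deeper: ignore variable names and other "
                "surface tokens, and restate the code's main purpose in one "
                "sentence.\n"
                f"Code:\n{code}"
            )
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        summary = response.choices[0].message.content
    return summary

if __name__ == "__main__":
    print(summarize_purpose("def f(a, b):\n    return a if a > b else b"))

In this sketch, each round feeds the previous summary back to the model and asks it to abstract away from token-level details, which is one plausible way to reduce sensitivity to benign perturbations such as variable renaming.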
Keywords:
Natural Language Processing: NLP: Summarization
Natural Language Processing: NLP: Tools