NeuroSymbolic LLM for Mathematical Reasoning and Software Engineering

Prithwish Jana

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Doctoral Consortium. Pages 8492-8493. https://doi.org/10.24963/ijcai.2024/961

In recent years, there has been significant interest in Large Language Models (LLMs) owing to their notable performance on natural language processing (NLP) tasks. However, although their results show promise in mathematical reasoning and software engineering, LLMs have not yet reached a satisfactory level of performance in these domains. In response, current approaches have prioritized scaling up the size of LLMs, which demands substantial computational resources and data. Our objective is to pursue a different path: developing neurosymbolic language models. We propose to integrate logical and symbolic feedback during training, enabling significantly smaller language models to achieve far better reasoning capabilities than the LLMs currently in use.
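The idea of folding symbolic feedback into training can be pictured, very roughly, as adding a verifier-derived penalty to the ordinary language-model loss. The sketch below is purely illustrative and is not the authors' method: the names `symbolic_feedback` and `training_loss`, the toy arithmetic verifier, and the additive-penalty form are all assumptions made for this example.

```python
# Illustrative sketch (not the paper's actual method): a symbolic checker
# scores a model's output claim, and that score is folded into the training
# loss as an extra penalty. All names here are invented for illustration.

import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

def safe_eval(expr: str) -> float:
    """Evaluate a small arithmetic expression by walking its AST."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

def symbolic_feedback(claim: str) -> float:
    """Return 0.0 if the claimed equation checks out, else 1.0 (a penalty)."""
    try:
        lhs, rhs = claim.split("=")
        return 0.0 if safe_eval(lhs) == safe_eval(rhs) else 1.0
    except ValueError:
        return 1.0

def training_loss(nll: float, claim: str, weight: float = 0.5) -> float:
    """Combine an ordinary language-model loss with the symbolic penalty."""
    return nll + weight * symbolic_feedback(claim)

print(training_loss(1.2, "2*3+1=7"))  # verified claim: no penalty added
print(training_loss(1.2, "2*3+1=8"))  # refuted claim: loss is penalized
```

In a real system the verifier would be a theorem prover, type checker, or test harness rather than an arithmetic evaluator, and the penalty would shape gradient updates rather than a scalar; the sketch only conveys the feedback-signal structure.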
Keywords:
DC: Natural Language Processing
DC: Knowledge Representation and Reasoning
DC: Machine Learning