Memorizing Documents with Guidance in Large Language Models

Bumjin Park, Jaesik Choi

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 6460-6468. https://doi.org/10.24963/ijcai.2024/714

Training data plays a pivotal role in AI models. Large language models (LLMs) are trained on massive amounts of documents, and their parameters hold document-related content. Several recent studies identified content-specific locations in LLMs by examining the parameters after training. Instead of such post hoc interpretation, we propose a document-wise memory architecture that tracks document memories during training. The architecture maps document representations to memory entries, which softly mask memories in the forward pass of the LLM. Additionally, we propose a document guidance loss, which increases the likelihood of text given its own document memories and decreases its likelihood given the memories of other documents. Experimental results on Wikitext-103-v1 with Pythia-1B show that the proposed methods provide distinct memory entries for different documents and high recall of document-related content when generating with the trained document-wise memories.
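To make the two ideas in the abstract concrete, the sketch below illustrates, under stated assumptions, (1) a document-wise memory module that maps a document representation to a soft mask over memory entries applied in the forward pass, and (2) a contrastive document guidance loss that raises the likelihood of a document's text under its own memory mask and lowers it under another document's mask. A toy causal LM stands in for Pythia-1B, and all names (DocumentMemory, ToyMemoryLM, document_guidance_loss) are illustrative assumptions, not the authors' implementation.

```python
# Minimal PyTorch sketch of document-wise memories with soft masking and a
# document guidance loss. Illustrative only; not the paper's exact code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DocumentMemory(nn.Module):
    """Maps a document representation to a soft mask over memory entries."""
    def __init__(self, doc_dim: int, num_entries: int):
        super().__init__()
        self.proj = nn.Linear(doc_dim, num_entries)

    def forward(self, doc_repr):
        # Soft mask in [0, 1], one weight per memory entry.
        return torch.sigmoid(self.proj(doc_repr))        # (batch, num_entries)

class ToyMemoryLM(nn.Module):
    """Tiny causal LM whose MLP hidden units act as maskable memory entries."""
    def __init__(self, vocab: int, dim: int = 64, num_entries: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.up = nn.Linear(dim, num_entries)             # memory entries
        self.down = nn.Linear(num_entries, dim)
        self.head = nn.Linear(dim, vocab)

    def forward(self, input_ids, memory_mask):
        h = self.embed(input_ids)                          # (batch, seq, dim)
        entries = F.gelu(self.up(h))                       # (batch, seq, entries)
        entries = entries * memory_mask.unsqueeze(1)       # soft masking
        h = h + self.down(entries)
        return self.head(h)                                # next-token logits

def per_sequence_nll(logits, labels):
    """Average next-token negative log-likelihood per sequence."""
    loss = F.cross_entropy(logits[:, :-1].flatten(0, 1),
                           labels[:, 1:].flatten(),
                           reduction="none")
    return loss.view(labels.size(0), -1).mean(dim=1)

def document_guidance_loss(lm, input_ids, own_mask, other_mask):
    """Increase likelihood under the document's own memories and decrease it
    under another document's memories (one illustrative contrastive form)."""
    loss_own = per_sequence_nll(lm(input_ids, own_mask), input_ids)
    loss_other = per_sequence_nll(lm(input_ids, other_mask), input_ids)
    return (loss_own - loss_other).mean()

# Usage sketch: pair a document's tokens with its own mask and a mask from a
# different document.
doc_mem = DocumentMemory(doc_dim=32, num_entries=256)
lm = ToyMemoryLM(vocab=1000)
masks = doc_mem(torch.randn(2, 32))        # representations of two documents
tokens = torch.randint(0, 1000, (1, 16))
loss = document_guidance_loss(lm, tokens, masks[0:1], masks[1:2])
loss.backward()
```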
Keywords:
Natural Language Processing: NLP: Language models
Natural Language Processing: NLP: Embeddings
Natural Language Processing: NLP: Interpretability and analysis of models for NLP
Natural Language Processing: NLP: Language generation