How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training

Episode 570 · 24:39

🤗 Upvotes: 16 | cs.LG, cs.AI, cs.CL, cs.CV, cs.HC

Authors:
Yixin Ou, Yunzhi Yao, Ningyu Zhang, Hui Jin, Jiacheng Sun, Shumin Deng, Zhenguo Li, Huajun Chen

Title:
How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training

Arxiv:
http://arxiv.org/abs/2502.11196v1

Abstract:
Despite the exceptional capabilities of Large Language Models (LLMs) in knowledge-intensive tasks, there is a critical gap in understanding how they internalize new knowledge, particularly how acquired knowledge becomes structurally embedded in their neural computations. We address this issue through the lens of knowledge circuit evolution, identifying computational subgraphs that facilitate knowledge storage and processing. Our systematic analysis of circuit evolution throughout continual pre-training reveals several key findings: (1) the acquisition of new knowledge is influenced by its relevance to pre-existing knowledge; (2) the evolution of knowledge circuits exhibits a distinct phase shift from formation to optimization; (3) the evolution of knowledge circuits follows a deep-to-shallow pattern. These insights not only advance our theoretical understanding of the mechanisms of new knowledge acquisition in LLMs, but also have practical implications for improving continual pre-training strategies to enhance model performance. Code and data will be available at https://github.com/zjunlp/DynamicKnowledgeCircuits.
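To make the idea of a knowledge circuit concrete, here is a minimal, self-contained Python sketch. It is not the authors' implementation: the edge set, the toy recall probe, and all function names are hypothetical stand-ins. It treats a model checkpoint as a set of weighted edges between components, identifies a "circuit" as the edges whose ablation most degrades the probe, and measures how the circuit changes between an early and a late checkpoint, mirroring the paper's circuit-evolution analysis at toy scale.

import random

# Hypothetical illustration: each "edge" stands in for a connection between
# model components, and its weight is that edge's contribution to recalling
# a target fact. This is a toy sketch, not the paper's attribution method.
random.seed(0)
EDGES = [f"e{i}" for i in range(20)]

def make_checkpoint(strength):
    """Simulate per-edge contributions at one training stage."""
    return {e: random.random() * strength for e in EDGES}

def knowledge_score(contrib, ablated=frozenset()):
    """Toy probe: recall is the summed contribution of unablated edges."""
    return sum(v for e, v in contrib.items() if e not in ablated)

def find_circuit(contrib, budget=5):
    """Keep the `budget` edges whose single-edge ablation hurts the probe most."""
    base = knowledge_score(contrib)
    effect = {e: base - knowledge_score(contrib, frozenset({e})) for e in contrib}
    return set(sorted(effect, key=effect.get, reverse=True)[:budget])

# Compare circuits at two simulated stages of continual pre-training.
early = find_circuit(make_checkpoint(0.5))   # circuit still forming
late = find_circuit(make_checkpoint(2.0))    # circuit after further training

# Track evolution as Jaccard overlap between the two circuits.
overlap = len(early & late) / len(early | late)
print("early circuit:", sorted(early))
print("late circuit: ", sorted(late))
print(f"circuit overlap across checkpoints: {overlap:.2f}")

In the paper's actual setting, the graph would be a transformer's computation graph (components such as attention heads and MLP blocks), and the probe would be the model's recall of target facts, so circuit discovery and the cross-checkpoint comparison are far more involved than this sketch suggests.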

