Episode 388: $\text{Transformer}^2$: Self-adaptive LLMs

Duration: 26:40

🤗 Upvotes: 25 | Categories: cs.LG, cs.AI, cs.CL

Authors:
Qi Sun, Edoardo Cetin, Yujin Tang

Title:
$\text{Transformer}^2$: Self-adaptive LLMs

arXiv:
http://arxiv.org/abs/2501.06252v2

Abstract:
Self-adaptive large language models (LLMs) aim to solve the challenges posed by traditional fine-tuning methods, which are often computationally intensive and static in their ability to handle diverse tasks. We introduce $\text{Transformer}^2$, a novel self-adaptation framework that adapts LLMs for unseen tasks in real-time by selectively adjusting only the singular components of their weight matrices. During inference, $\text{Transformer}^2$ employs a two-pass mechanism: first, a dispatch system identifies the task properties, and then task-specific "expert" vectors, trained using reinforcement learning, are dynamically mixed to obtain targeted behavior for the incoming prompt. Our method outperforms ubiquitous approaches such as LoRA, with fewer parameters and greater efficiency. $\text{Transformer}^2$ demonstrates versatility across different LLM architectures and modalities, including vision-language tasks. $\text{Transformer}^2$ represents a significant leap forward, offering a scalable, efficient solution for enhancing the adaptability and task-specific performance of LLMs, paving the way for truly dynamic, self-organizing AI systems.
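To make the abstract's mechanism concrete, here is a minimal PyTorch sketch (not the authors' implementation) of the two ideas it describes: adapting only the singular values of a weight matrix, and a two-pass inference step that mixes task-specific expert vectors. The `dispatch` function and the `expert_vectors` dictionary are hypothetical stand-ins for the paper's task classifier and RL-trained vectors.

```python
import torch

def adapt_singular_values(W: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    """Rescale only the singular values of W by a vector z,
    keeping the singular directions U and V fixed."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    return U @ torch.diag(S * z) @ Vh

rank = 64                      # min(out_features, in_features) of the layer
W = torch.randn(64, 128)       # toy weight matrix standing in for one LLM layer

# Hypothetical per-task expert vectors (the paper trains these with RL).
expert_vectors = {"math": torch.rand(rank), "code": torch.rand(rank)}

def two_pass_adapt(W, prompt, dispatch):
    # Pass 1: the dispatch system identifies the task properties of the prompt.
    task_probs = dispatch(prompt)  # e.g. {"math": 0.8, "code": 0.2}
    # Mix the expert vectors according to the inferred task mixture.
    z = sum(p * expert_vectors[t] for t, p in task_probs.items())
    # Pass 2: answer the prompt using the adapted weights.
    return adapt_singular_values(W, z)

W_adapted = two_pass_adapt(W, "Solve 2x + 3 = 7", lambda p: {"math": 1.0})
```

Because only a length-`rank` vector is learned per task and per matrix, this kind of adaptation uses far fewer parameters than a LoRA update of the same layer, which is the efficiency claim the abstract makes.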

