Multimodal Prompt Optimization: Why Not Leverage Multiple Modalities for MLLMs (Episode 1269)

Duration: 25:37

🤗 Upvotes: 37 | cs.LG, cs.AI, cs.CL

Authors:
Yumin Choi, Dongki Kim, Jinheon Baek, Sung Ju Hwang

Title:
Multimodal Prompt Optimization: Why Not Leverage Multiple Modalities for MLLMs

arXiv:
http://arxiv.org/abs/2510.09201v1

Abstract:
Large Language Models (LLMs) have shown remarkable success, and their multimodal extensions (MLLMs) further unlock capabilities spanning images, videos, and other modalities beyond text. Despite this shift, prompt optimization approaches, designed to reduce the burden of manual prompt crafting while maximizing performance, remain confined to text, limiting the full potential of MLLMs. Motivated by this gap, we introduce the new problem of multimodal prompt optimization, which expands the prior definition of prompt optimization to the multimodal space defined by pairs of textual and non-textual prompts. To tackle this problem, we propose the Multimodal Prompt Optimizer (MPO), a unified framework that not only performs joint optimization of multimodal prompts through alignment-preserving updates but also guides the selection of candidate prompts by leveraging earlier evaluations as priors in a Bayesian selection strategy. Through extensive experiments across diverse modalities beyond text, such as images, videos, and even molecules, we demonstrate that MPO outperforms leading text-only optimization methods, establishing multimodal prompt optimization as a crucial step toward realizing the full potential of MLLMs.
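The selection strategy the abstract describes, using earlier evaluations as priors over candidate quality, resembles a Bayesian bandit. Below is a minimal, hypothetical Python sketch of that idea, not the authors' implementation: each candidate multimodal prompt keeps a Beta posterior over its success rate, updates it from mini-batch evaluations, and is chosen by Thompson sampling. All names here (PromptCandidate, toy_evaluate) are illustrative assumptions.

import random

class PromptCandidate:
    """A multimodal prompt: paired textual and non-textual parts."""
    def __init__(self, text, media):
        self.text = text    # textual prompt
        self.media = media  # non-textual prompt (e.g., an image, video, or molecule)
        self.alpha = 1.0    # Beta-posterior pseudo-count of successes
        self.beta = 1.0     # Beta-posterior pseudo-count of failures

    def update(self, successes, failures):
        # Fold a mini-batch evaluation into the posterior,
        # so earlier evaluations act as priors for later selection.
        self.alpha += successes
        self.beta += failures

    def sample_score(self):
        # Thompson sampling: draw a plausible quality from the posterior.
        return random.betavariate(self.alpha, self.beta)

def select_candidate(candidates):
    # Pick the candidate whose posterior draw is highest.
    return max(candidates, key=lambda c: c.sample_score())

def toy_evaluate(candidate, batch_size=8):
    # Stand-in for scoring a prompt on a validation mini-batch;
    # here each candidate has a hidden fixed success rate.
    hits = sum(random.random() < candidate.true_rate for _ in range(batch_size))
    return hits, batch_size - hits

candidates = [PromptCandidate(f"prompt variant {i}", media=None) for i in range(4)]
for i, c in enumerate(candidates):
    c.true_rate = 0.3 + 0.15 * i  # hidden quality, used only by the toy evaluator

for _ in range(50):  # evaluation budget
    chosen = select_candidate(candidates)
    s, f = toy_evaluate(chosen)
    chosen.update(s, f)

best = max(candidates, key=lambda c: c.alpha / (c.alpha + c.beta))
print("Best candidate by posterior mean:", best.text)

Thompson sampling is just one simple way to trade exploration against exploitation; MPO's actual selection rule and its alignment-preserving joint updates of text and media are more involved, and the sketch only shows how prior evaluations can steer candidate selection.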

