Episode 476: MM-IQ: Benchmarking Human-Like Abstraction and Reasoning in Multimodal Models

Duration: 24:39

🤗 Upvotes: 15 | cs.AI, cs.CV

Authors:
Huanqia Cai, Yijun Yang, Winston Hu

Title:
MM-IQ: Benchmarking Human-Like Abstraction and Reasoning in Multimodal Models

Arxiv:
http://arxiv.org/abs/2502.00698v1

Abstract:
IQ testing has served as a foundational methodology for evaluating human cognitive capabilities, deliberately decoupling assessment from linguistic background, language proficiency, or domain-specific knowledge to isolate core competencies in abstraction and reasoning. Yet, artificial intelligence research currently lacks systematic benchmarks to quantify these critical cognitive dimensions in multimodal systems. To address this critical gap, we propose MM-IQ, a comprehensive evaluation framework comprising 2,710 meticulously curated test items spanning 8 distinct reasoning paradigms. Through systematic evaluation of leading open-source and proprietary multimodal models, our benchmark reveals striking limitations: even state-of-the-art architectures achieve only marginally superior performance to random chance (27.49% vs. 25% baseline accuracy). This substantial performance chasm highlights the inadequacy of current multimodal systems in approximating fundamental human reasoning capacities, underscoring the need for paradigm-shifting advancements to bridge this cognitive divide.
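The abstract's 25% random-chance baseline corresponds to multiple-choice items with four answer options. As a rough illustration only (this is not the authors' evaluation code, and the field names below are assumptions), per-paradigm accuracy on such a benchmark could be tallied like this:

```python
# Hypothetical sketch: score a model's answers on a four-option
# multiple-choice benchmark such as MM-IQ and compare each reasoning
# paradigm's accuracy to the 25% random-chance baseline.
from collections import defaultdict

def accuracy_by_paradigm(records):
    """records: iterable of dicts with 'paradigm', 'prediction', and
    'answer' keys (field names are assumptions for illustration)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["paradigm"]] += 1
        correct[r["paradigm"]] += int(r["prediction"] == r["answer"])
    return {p: correct[p] / total[p] for p in total}

if __name__ == "__main__":
    # Toy data covering two of the eight reasoning paradigms.
    toy = [
        {"paradigm": "spatial", "prediction": "B", "answer": "B"},
        {"paradigm": "spatial", "prediction": "C", "answer": "A"},
        {"paradigm": "numerical", "prediction": "D", "answer": "D"},
        {"paradigm": "numerical", "prediction": "A", "answer": "C"},
    ]
    chance = 0.25  # 1 / 4 options, the baseline quoted in the abstract
    for paradigm, acc in accuracy_by_paradigm(toy).items():
        print(f"{paradigm}: {acc:.2%} (chance baseline {chance:.0%})")
```

Aggregating the same counts over all 2,710 items yields the overall accuracy figure the abstract reports (27.49% for the best model), which is only marginally above chance.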

