Episode 145: MH-MoE: Multi-Head Mixture-of-Experts

Duration: 21:00 | 🤗 Paper Upvotes: 17 | cs.CL

Authors:
Shaohan Huang, Xun Wu, Shuming Ma, Furu Wei

Title:
MH-MoE: Multi-Head Mixture-of-Experts

Arxiv:
http://arxiv.org/abs/2411.16205v2

Abstract:
Multi-Head Mixture-of-Experts (MH-MoE) demonstrates superior performance by using the multi-head mechanism to collectively attend to information from various representation spaces within different experts. In this paper, we present a novel implementation of MH-MoE that maintains both FLOPs and parameter parity with sparse Mixture of Experts models. Experimental results on language models show that the new implementation yields quality improvements over both vanilla MoE and fine-grained MoE models. Additionally, our experiments demonstrate that MH-MoE is compatible with 1-bit Large Language Models (LLMs) such as BitNet.
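To make the mechanism described in the abstract concrete, below is a minimal PyTorch sketch of a multi-head MoE layer: each token's hidden vector is split into heads, every head is routed to an expert independently, and the expert outputs are merged back into the full token. All names and sizes here (MHMoE, n_heads, d_ff, top-1 routing, the expert hidden width) are illustrative assumptions for these episode notes, not the authors' implementation; the paper's actual dimension choices are what give it FLOP and parameter parity with a standard sparse MoE.

```python
# Illustrative sketch of a multi-head mixture-of-experts layer.
# Names, sizes, and the top-1 routing are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MHMoE(nn.Module):
    def __init__(self, d_model=512, n_heads=4, n_experts=8, d_ff=1024):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        # Router scores each sub-token (head) independently.
        self.router = nn.Linear(self.d_head, n_experts, bias=False)
        # Experts operate on head-sized vectors; the hidden width d_ff is an
        # illustrative choice -- the paper tunes dimensions to keep FLOP and
        # parameter parity with a standard sparse MoE.
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(self.d_head, d_ff),
                nn.GELU(),
                nn.Linear(d_ff, self.d_head),
            )
            for _ in range(n_experts)
        ])

    def forward(self, x):
        # x: (batch, seq, d_model)
        b, s, d = x.shape
        # Split every token into n_heads sub-tokens of size d_head.
        sub = x.reshape(b * s * self.n_heads, self.d_head)
        gate = F.softmax(self.router(sub), dim=-1)   # (N, n_experts)
        top_w, top_e = gate.max(dim=-1)              # top-1 routing per head
        out = torch.zeros_like(sub)
        for e, expert in enumerate(self.experts):
            mask = top_e == e
            if mask.any():
                out[mask] = top_w[mask].unsqueeze(-1) * expert(sub[mask])
        # Merge the processed sub-tokens back into full tokens.
        return out.reshape(b, s, d)


if __name__ == "__main__":
    layer = MHMoE()
    y = layer(torch.randn(2, 16, 512))
    print(y.shape)  # torch.Size([2, 16, 512])
```

In this sketch each expert sees several smaller sub-tokens per token rather than one full-width vector, which is how the head split lets the layer draw on multiple representation subspaces across different experts, as the abstract describes.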

