HaploVL: A Single-Transformer Baseline for Multi-Modal Understanding

The University of Hong Kong · ARC Lab, Tencent · Tsinghua University

Abstract

Recent advancements in large language models (LLMs) have significantly propelled the development of large multi-modal models (LMMs), highlighting the potential for general and intelligent assistants. However, most LMMs model the visual and textual modalities separately, which has led to recent efforts to develop native LMMs built on a single transformer. Despite their promise, these native models are resource-intensive and often exhibit performance gaps compared to their compositional counterparts. To alleviate this issue, we propose a simple yet efficient method for constructing a native, end-to-end large multi-modal model baseline within a single transformer. First, we propose a new multi-modal transformer that fuses multi-modal inputs at an early stage and responds to visual instructions in an auto-regressive manner. Second, we devise an efficient training recipe for the proposed model that harnesses the prior knowledge of pre-trained models, addressing both the performance limitations and the challenge of resource consumption. The proposed model outperforms other single-transformer LMMs and significantly narrows the performance gap with compositional LMMs.

Method

Model Architecture:

Figure 1: The model architecture. A single transformer processes the multi-modal sequence. A bidirectional mask is used for the visual tokens and a causal mask for the textual tokens.
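
As a concrete illustration of the masking scheme in Figure 1, the minimal PyTorch sketch below builds a combined mask in which textual tokens follow the usual causal pattern while visual tokens can also attend to one another bidirectionally. The function and variable names (build_haplo_attention_mask, is_visual) are our own illustrative choices, not the released implementation.

    import torch

    def build_haplo_attention_mask(is_visual: torch.Tensor) -> torch.Tensor:
        """Build a boolean attention mask for a mixed visual/textual sequence.

        is_visual: (seq_len,) bool tensor, True at positions holding visual tokens.
        Returns a (seq_len, seq_len) bool tensor where True means attention is allowed.
        """
        seq_len = is_visual.shape[0]
        # Causal (lower-triangular) mask applied to every token.
        causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
        # Visual tokens may additionally attend to all other visual tokens.
        visual_block = is_visual.unsqueeze(0) & is_visual.unsqueeze(1)
        return causal | visual_block

    # Toy sequence: 4 visual tokens followed by 3 text tokens.
    mask = build_haplo_attention_mask(torch.tensor([True] * 4 + [False] * 3))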

Training Recipe:

Figure 2: The training pipeline of our HaploVL. During the pre-training stage (a), the pre-decoder is trained by distilling knowledge from the pre-trained vision encoder and the text embeddings of the LLM. Heads and teacher models are dropped after pre-training. In the full fine-tuning stage (b), the entire model is fine-tuned using visual instruction data.
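
The sketch below gives a rough idea of what the stage-1 objective in Figure 2(a) could look like: the pre-decoder's visual tokens are regressed onto features from the frozen vision encoder, and its textual tokens onto the LLM's embedding table, through lightweight heads that are discarded afterwards. The function name, the choice of an MSE loss, and the head arguments are assumptions for illustration, not the paper's exact formulation.

    import torch.nn.functional as F

    def stage1_distillation_loss(pre_decoder_visual, pre_decoder_text,
                                 teacher_visual, teacher_text,
                                 visual_head, text_head):
        """Illustrative stage-1 objective (not the paper's exact formulation).

        pre_decoder_visual / pre_decoder_text: pre-decoder outputs for the visual
        and textual tokens; teacher_visual: features from the frozen vision
        encoder; teacher_text: embeddings from the frozen LLM embedding table;
        visual_head / text_head: lightweight projections dropped after stage 1.
        """
        # Regress projected visual tokens onto the vision-encoder features.
        loss_visual = F.mse_loss(visual_head(pre_decoder_visual), teacher_visual)
        # Regress projected text tokens onto the LLM's text embeddings.
        loss_text = F.mse_loss(text_head(pre_decoder_text), teacher_text)
        return loss_visual + loss_text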

Results

Main results

Table 1: Comparison on multi-modal benchmarks. `*' denotes that images from the related training datasets are observed during training. HaploVL-8B-MI is the variant further fine-tuned on multi-image datasets.

Ablation study

Ablation for different LLMs, resolution, and visual instruction data:

Table 2: Ablation for different LLMs, resolution (Res.), and visual instruction data (Data-S3). `*' denotes that images from training datasets are used during training.

    🌟 A more advanced language model delivers significantly superior results.
    🌟 Higher input resolution enhances performance, as the LMM can capture finer-grained visual details.
    🌟 Expanding the visual instruction tuning data leads to substantial improvements by enriching the LMM's knowledge.

Ablation for the pre-training stage (stage 1):

Table 3: Ablation for the pre-training stage.

    🌟 The pre-training stage accelerates convergence.

Comparison with a compositional LMM using the same LLM and training data:

Table 4: Comparison with LLaVA-1.5-7B on MMVP and MMStar. CP: coarse perception, FP: fine-grained perception, IR: instance reasoning, LR: logical reasoning, ST: science and technology, and MA: mathematics.

Figure 3: Qualitative comparison of LLaVA-1.5-7B and our HaploVL-7B. The first row shows cases of fine-grained perception; the second row shows logical-reasoning cases that depend on fine-grained perception.

    🌟 Early fusion of the textual and visual embeddings is beneficial for fine-grained perception.

Figure 4: Visualization of the early-fusion mechanism in our single transformer. The second row shows the attention maps of the gray words over the vision embeddings after the pre-decoder.
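
For readers who want to reproduce this kind of visualization, the sketch below shows one way to turn a text token's attention over the visual tokens into a patch-grid heat map, assuming the query/key projections of a pre-decoder attention layer have been captured (e.g., with forward hooks). All names and the head-averaging choice are illustrative assumptions, not the exact procedure used for Figure 4.

    import torch

    @torch.no_grad()
    def text_to_vision_attention(query_states, key_states, text_index, visual_slice):
        """Attention of one text token over the visual tokens, as a patch grid.

        query_states, key_states: (num_heads, seq_len, head_dim) projections from
        one pre-decoder attention layer (captured, e.g., with forward hooks).
        text_index: position of the word of interest; visual_slice: slice covering
        the visual tokens in the sequence.
        """
        head_dim = query_states.shape[-1]
        q = query_states[:, text_index]            # (num_heads, head_dim)
        k = key_states[:, visual_slice]            # (num_heads, n_vis, head_dim)
        scores = torch.einsum("hd,hnd->hn", q, k) / head_dim ** 0.5
        attn = scores.softmax(dim=-1).mean(dim=0)  # average over heads
        grid = int(attn.numel() ** 0.5)            # assume a square patch grid
        return attn.reshape(grid, grid)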

Examples

BibTeX


        @article{yang2024haplo,
          title={HaploVL: A Single-Transformer Baseline for Multi-Modal Understanding},
          author={Yang, Rui and Song, Lin and Xiao, Yicheng and Huang, Runhui and Ge, Yixiao and Shan, Ying and Zhao, Hengshuang},
          journal={arXiv preprint arXiv:xxxx.xxxxx},
          year={2025}
        }