tyfeld/MMaDA-Parallel: Official Implementation of “MMaDA-Parallel: Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation”

Demo: parallel text-image generation in action.

While the goal of thinking-aware generation is to improve performance on complex tasks, we identify a critical failure mode in existing sequential, autoregressive approaches: error propagation can actually degrade performance. To analyze this issue systematically, we introduce ParaBench, a new benchmark designed to evaluate both text and image output modalities. Our analysis with ParaBench shows that this degradation is strongly correlated with poor alignment between the generated reasoning and the final image. To address this, we propose a parallel multimodal diffusion framework that enables continuous, bidirectional interaction between text and images throughout the denoising trajectory. The resulting model, MMaDA-Parallel, is trained with supervised finetuning and then further optimized with Parallel Reinforcement Learning (ParaRL), a novel strategy that applies semantic rewards along the denoising trajectory to enforce cross-modal consistency. Experiments confirm that our approach significantly improves cross-modal alignment and semantic consistency, achieving a 6.9% improvement in Output Alignment on ParaBench over the state-of-the-art model Bagel, and establishing a more robust paradigm for thinking-aware image synthesis.

Method

Architecture of MMaDA-Parallel. During training, image and text responses are masked and predicted in parallel by a uniform mask predictor. During sampling, the model performs parallel decoding to jointly generate both the image and text responses, enabling continuous cross-modal interaction.
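To make the parallel decoding idea concrete, here is a minimal toy sketch of joint iterative unmasking. It is our own illustration, not the repository's implementation: `predictor` stands in for the uniform mask predictor and is assumed to see both partial sequences, which is what enables cross-modal interaction during sampling.

```python
import random

MASK = "<MASK>"

def parallel_decode(text_len, image_len, steps, predictor, rng=random.Random(0)):
    """Toy parallel decoding: jointly unmask text and image token sequences.

    `predictor(text, image, pos, modality)` returns a token for one masked
    position while conditioning on BOTH partially decoded sequences.
    """
    text = [MASK] * text_len
    image = [MASK] * image_len
    for step in range(steps):
        # Reveal a fraction of the remaining masked positions in each modality,
        # so both sequences are decoded together rather than sequentially.
        for seq, modality in ((text, "text"), (image, "image")):
            masked = [i for i, t in enumerate(seq) if t == MASK]
            k = max(1, len(masked) // (steps - step)) if masked else 0
            for i in rng.sample(masked, min(k, len(masked))):
                seq[i] = predictor(text, image, i, modality)
    # Fill any positions left over by the integer division above.
    for seq, modality in ((text, "text"), (image, "image")):
        for i, t in enumerate(seq):
            if t == MASK:
                seq[i] = predictor(text, image, i, modality)
    return text, image
```

In the real model the predictor is a single diffusion transformer and the unmasking schedule is learned, but the control flow above captures why text and image tokens can influence each other at every denoising step.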

Main Results

Qualitative comparison.


Quantitative results on ParaBench.

Note: Our model has so far been validated mainly on synthetic datasets covering indoor environments, still life, architecture, and natural landscapes. Its performance on out-of-distribution inputs, such as human faces or real-world photographic imagery, has not yet been fully explored. We are actively expanding our training corpus to include more diverse datasets.

1. Environment Setup

First, prepare a PyTorch environment with torch 2.3.1 or higher, then install the dependencies:

pip install -r requirements.txt
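If you want to sanity-check the torch requirement before installing the rest, here is a small convenience helper. It is our own sketch (not part of the repo) and assumes release-style version strings such as `2.3.1` or `2.4.0+cu121`.

```python
def meets_minimum(installed: str, minimum: str = "2.3.1") -> bool:
    """Numerically compare dotted version strings.

    Strips local build tags (e.g. "+cu121") and compares the first three
    numeric components; pre-release suffixes like "a0" are not handled.
    """
    def parts(v: str):
        return [int(p) for p in v.split("+")[0].split(".")[:3]]
    return parts(installed) >= parts(minimum)

# Usage:
#   import torch
#   assert meets_minimum(torch.__version__), "torch >= 2.3.1 required"
```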

We provide two variants of MMaDA-Parallel with different tokenizers: MMaDA-Parallel-A is trained with the Amused-VQ tokenizer, and MMaDA-Parallel-M with the Magvitv2 tokenizer.

2. Parallel Generation with MMaDA-Parallel-A

You can use the local Gradio app directly to experience parallel generation with MMaDA-Parallel-A:

Alternatively, you can use the inference script to run parallel generation:

cd MMaDA-Parallel-A
python inference.py \
    --checkpoint tyfeld/MMaDA-Parallel-A \
    --vae_ckpt tyfeld/MMaDA-Parallel-A \
    --prompt "Replace the laptops with futuristic transparent tablets displaying holographic screens, and change the drink to a cup of glowing blue energy drink." \
    --image_path examples/image.png \
    --height 512 \
    --width 512 \
    --timesteps 64 \
    --text_steps 128 \
    --text_gen_length 256 \
    --text_block_length 32 \
    --cfg_scale 0 \
    --cfg_img 4.0 \
    --temperature 1.0 \
    --text_temperature 0 \
    --seed 42 \
    --output_dir output/results_interleave
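When running many edits in a batch, it can be convenient to assemble the command above programmatically. The sketch below mirrors the flags shown in the README; the `build_inference_cmd` helper itself is our own illustration, not part of the repository.

```python
def build_inference_cmd(prompt, image_path, out_dir, **overrides):
    """Build the MMaDA-Parallel-A inference command as an argv list.

    Defaults mirror the example invocation above; pass keyword overrides
    (e.g. seed=7) to change individual flags.
    """
    args = {
        "checkpoint": "tyfeld/MMaDA-Parallel-A",
        "vae_ckpt": "tyfeld/MMaDA-Parallel-A",
        "prompt": prompt,
        "image_path": image_path,
        "height": 512, "width": 512,
        "timesteps": 64, "text_steps": 128,
        "text_gen_length": 256, "text_block_length": 32,
        "cfg_scale": 0, "cfg_img": 4.0,
        "temperature": 1.0, "text_temperature": 0,
        "seed": 42, "output_dir": out_dir,
    }
    args.update(overrides)
    cmd = ["python", "inference.py"]
    for key, value in args.items():
        cmd += [f"--{key}", str(value)]
    return cmd

# Usage: run each edit with subprocess.run(cmd, cwd="MMaDA-Parallel-A")
```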

3. Parallel Generation with MMaDA-Parallel-M

cd MMaDA-Parallel-M
python inference.py interleave_root=./interleave_validation

Citation
@article{tian2025mmadaparallel,
  title={MMaDA-Parallel: Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation},
  author={Tian, Ye and Yang, Ling and Yang, Jiongfan and Wang, Anran and Tian, Yu and Zheng, Jiani and Wang, Haochen and Teng, Zhiyang and Wang, Zhuochen and Wang, Yinjie and Tong, Yunhai and Wang, Mengdi and Li, Xiangtai},
  journal={arXiv preprint arXiv:2511.09611},
  year={2025}
}

Acknowledgements

This work is largely based on MMaDA and Lumina-DiMOO. Thanks to all the authors for their great work.


