MMPhysVideo: Scaling Physical Plausibility in Video Generation via Joint Multimodal Modeling

¹CASIA · ²StepFun · *Project Lead · Corresponding authors

Abstract

Despite advances in generating visually stunning content, video diffusion models (VDMs) often yield physically inconsistent results because they are trained with a pixel-only reconstruction objective. To address this, we propose MMPhysVideo, the first framework to scale physical plausibility in video generation through joint multimodal modeling. We recast perceptual cues, specifically semantics, geometry, and spatio-temporal trajectories, into a unified pseudo-RGB format, enabling VDMs to directly capture complex physical dynamics. To mitigate cross-modal interference, we propose a Bidirectionally Controlled Teacher architecture, which uses parallel branches to fully decouple RGB and perception processing and adopts two zero-initialized control links to gradually learn pixel-wise consistency. For inference efficiency, the teacher's physical prior is distilled into a single-stream student model via representation alignment. Furthermore, we present MMPhysPipe, a scalable data curation and annotation pipeline tailored for constructing physics-rich multimodal datasets. MMPhysPipe employs a vision-language model (VLM) guided by a chain-of-visual-evidence rule to pinpoint physical subjects, enabling expert models to extract multi-granular perceptual information. Without additional inference cost, MMPhysVideo consistently improves physical plausibility and visual quality over strong baselines across various benchmarks and achieves state-of-the-art performance compared to existing methods.
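To make the "zero-initialized control link" idea concrete, here is a minimal NumPy sketch of the mechanism as described in the abstract: features from the perception branch are injected into the RGB branch through a zero-initialized projection (analogous to the zero convolutions in ControlNet), so at the start of training the RGB branch behaves exactly like the pretrained model, and cross-modal influence grows only as the projection weights are learned. The function names and the linear (rather than convolutional) form are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def zero_linear(dim):
    # Zero-initialized projection weights: the link contributes nothing at
    # initialization, so training starts from the pretrained RGB behavior.
    # (Illustrative; the real model likely uses zero-initialized conv/linear
    # layers inside a transformer block.)
    return np.zeros((dim, dim))

def control_link(rgb_feat, perc_feat, w_zero):
    # Inject perception-branch features into the RGB branch through the
    # zero-initialized map. With w_zero all zeros, the output equals the
    # unmodified RGB features; gradients through perc_feat @ w_zero let the
    # link gradually learn pixel-wise consistency.
    return rgb_feat + perc_feat @ w_zero

# At initialization the RGB branch is untouched by the perception branch:
rgb = np.random.default_rng(0).standard_normal((16, 8))
perc = np.random.default_rng(1).standard_normal((16, 8))
out = control_link(rgb, perc, zero_linear(8))
assert np.allclose(out, rgb)
```

The same construction is applied in both directions ("bidirectionally controlled"), so each branch can condition the other without disturbing either pretrained pathway at the start of training.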

Qualitative Comparison Results

Baseline: CogVideoX-2B

CogVideoX-2B
VideoREPA
MMPhysVideo (Ours)

Baseline: CogVideoX-5B

CogVideoX-5B
VideoREPA
MMPhysVideo (Ours)

Baseline: Wan2.1-1.3B

Wan2.1-1.3B
MMPhysVideo (Ours)

Visualization of Joint Multimodal Modeling

[Panels: RGB · Unified pseudo-RGB · RGB · XYZ (geometry)]
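The XYZ panel above reflects the abstract's key idea of recasting perceptual cues into a unified pseudo-RGB format. A simple way to realize this for geometry is to normalize per-pixel XYZ coordinates into the [0, 1] range so they can be consumed by the VDM's RGB pathway unchanged. The sketch below is a hedged illustration of that normalization; the (H, W, 3) point-map layout and min-max scaling are assumptions, not the paper's exact encoding.

```python
import numpy as np

def xyz_to_pseudo_rgb(points, eps=1e-8):
    # points: (H, W, 3) per-pixel XYZ coordinates (assumed layout).
    # Min-max normalize each spatial channel independently into [0, 1],
    # yielding a pseudo-RGB image that an RGB-trained VDM can process.
    lo = points.min(axis=(0, 1), keepdims=True)
    hi = points.max(axis=(0, 1), keepdims=True)
    return (points - lo) / np.maximum(hi - lo, eps)

# Example: a synthetic 8x8 point map becomes a valid pseudo-RGB image.
pts = np.random.default_rng(0).standard_normal((8, 8, 3))
img = xyz_to_pseudo_rgb(pts)
assert img.shape == (8, 8, 3) and img.min() >= 0.0 and img.max() <= 1.0
```

Semantic maps and trajectories can be rendered into the same three-channel format, which is what lets a single VDM jointly model all modalities without architectural changes to its input layer.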

BibTeX

@article{TODO,
  title={MMPhysVideo: Scaling Physical Plausibility in Video Generation via Joint Multimodal Modeling},
  author={Shubo Lin and Xuanyang Zhang and Wei Cheng and Weiming Hu and Gang Yu and Jin Gao},
  eprint={TODO},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  year={2026},
  url={TODO}
}