MeshSplat: Generalizable Sparse-View Surface Reconstruction via Gaussian Splatting

1 University of Science and Technology of China, 2 Shanghai Artificial Intelligence Laboratory

TL;DR: We introduce MeshSplat, a generalizable sparse-view surface reconstruction framework via Gaussian splatting.

MeshSplat enables high-quality surface reconstruction from sparse-view images.


Method Overview

Overview of MeshSplat. Taking a pair of images as input, MeshSplat first applies a multi-view backbone to extract feature maps for each view. We then construct per-view cost volumes via plane sweeping and use them to predict coarse depth maps, which are unprojected into 3D point clouds supervised by our proposed Weighted Chamfer Distance (WCD) loss. Next, the cost volumes and feature maps are fed into our Gaussian prediction network, which consists of a depth refinement network and a normal prediction network, to obtain pixel-aligned 2DGS. Finally, we use these 2DGS for novel view synthesis and to reconstruct the scene mesh.
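The sketch below illustrates this two-view pipeline in PyTorch. It is a minimal, assumed structure, not the released implementation: all module names, layer choices, and hyper-parameters are illustrative placeholders, the plane-sweep warp is replaced with an identity stand-in, and the 2DGS parameter layout (2D scale, rotation, color, opacity) is only an assumption.

```python
# Hedged sketch of the MeshSplat-style two-view pipeline described above.
# Every module and hyper-parameter here is a placeholder assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeshSplatSketch(nn.Module):
    def __init__(self, feat_dim=64, num_depths=32):
        super().__init__()
        self.num_depths = num_depths
        # Shared backbone producing per-view feature maps (stand-in conv stack).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        # Turns the cost volume into a probability over depth hypotheses.
        self.cost_to_depth = nn.Conv2d(num_depths, num_depths, 3, padding=1)
        # Heads refining depth and predicting per-pixel normals and 2DGS parameters.
        self.depth_refine = nn.Conv2d(feat_dim + 1, 1, 3, padding=1)
        self.normal_head = nn.Conv2d(feat_dim, 3, 3, padding=1)
        self.gaussian_head = nn.Conv2d(feat_dim, 2 + 4 + 3 + 1, 3, padding=1)  # scale, rotation, color, opacity

    def forward(self, img_a, img_b, depth_candidates):
        feat_a, feat_b = self.backbone(img_a), self.backbone(img_b)
        # Placeholder cost volume: per-plane feature correlation with an identity
        # "warp". A real plane-sweeping implementation warps feat_b into view A
        # for every depth hypothesis using the camera poses.
        cost = torch.stack(
            [(feat_a * feat_b).mean(dim=1) for _ in range(self.num_depths)], dim=1
        )                                                       # (B, D, H, W)
        prob = F.softmax(self.cost_to_depth(cost), dim=1)
        coarse_depth = (prob * depth_candidates.view(1, -1, 1, 1)).sum(dim=1, keepdim=True)
        # The coarse depth would be unprojected to a point cloud and supervised
        # with the Weighted Chamfer Distance loss (sketched separately below).
        refined_depth = coarse_depth + self.depth_refine(torch.cat([feat_a, coarse_depth], dim=1))
        normals = F.normalize(self.normal_head(feat_a), dim=1)
        gauss_params = self.gaussian_head(feat_a)               # pixel-aligned 2DGS attributes
        return refined_depth, normals, gauss_params

# Toy usage with random inputs and 32 assumed depth hypotheses.
model = MeshSplatSketch()
depths = torch.linspace(0.5, 10.0, 32)
img_a = img_b = torch.rand(1, 3, 64, 64)
depth, normals, gauss = model(img_a, img_b, depths)
```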



Reconstruction Results




MVSplat vs. MeshSplat: reconstruction results on the RE10K dataset.

MVSplat vs. MeshSplat: reconstruction results on the ScanNet dataset.

Visualizations of the predicted depth and normal maps, the confidence maps used in the WCD loss, and the kappa maps used in the normal loss. The confidence maps highlight unreliable matching regions, such as texture-less areas and regions that do not overlap between the two views. In the kappa maps, areas of higher uncertainty typically correspond to object boundaries.
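As a rough illustration of how such a confidence map can enter a Chamfer-style objective, the snippet below sketches a confidence-weighted Chamfer distance in PyTorch. The exact weighting and normalization used by MeshSplat's WCD loss may differ, so this should be read as an assumed formulation, not the paper's definition.

```python
# Hedged sketch of a confidence-weighted Chamfer distance between point clouds.
import torch

def weighted_chamfer_distance(pts_pred, pts_gt, conf_pred):
    """pts_pred: (N, 3), pts_gt: (M, 3), conf_pred: (N,) in [0, 1]."""
    # Pairwise squared distances between the two point sets.
    d2 = torch.cdist(pts_pred, pts_gt, p=2) ** 2   # (N, M)
    pred_to_gt = d2.min(dim=1).values              # (N,) nearest GT point per prediction
    gt_to_pred = d2.min(dim=0).values              # (M,) nearest prediction per GT point
    # Down-weight distances originating from low-confidence predicted points
    # (assumed weighting scheme).
    loss = (conf_pred * pred_to_gt).sum() / conf_pred.sum().clamp(min=1e-6)
    return loss + gt_to_pred.mean()
```

Under this assumed form, points in texture-less or non-overlapping regions receive low confidence and therefore contribute less to the point-cloud supervision.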



BibTeX

@article{chang2025meshsplat,
      title={MeshSplat: Generalizable Sparse-View Surface Reconstruction via Gaussian Splatting}, 
      author={Hanzhi Chang and Ruijie Zhu and Wenjie Chang and Mulin Yu and Yanzhe Liang and Jiahao Lu and Zhuoyuan Li and Tianzhu Zhang},
      journal={arXiv preprint arXiv:2508.17811},
      year={2025}
    }