Perceptually Guided 3DGS Streaming and Rendering for Virtual Reality

Yunxiang Zhang*, Sai Harsha Mupparaju*, Kenneth Chen, Jenna Kang, Xinyu Zhang, Maito Omori, Kazuyuki Arimatsu, Qi Sun
WACV 2026
PDF

Abstract

Recent breakthroughs in radiance fields, particularly 3D Gaussian Splatting (3DGS), have unlocked real-time, high-quality rendering of complex environments, enabling a wide range of applications. However, the stringent requirements of mixed reality (MR) rendering, such as rapid refresh rates, high-resolution stereo viewing, and constrained computing budgets, remain out of reach for current 3DGS techniques. At the same time, the wide field-of-view design of MR displays, which mimics human vision, presents a unique opportunity to exploit the human visual system's own perceptual limitations to reduce computational overhead without compromising user-perceived rendering quality.

To this end, we propose a perception-guided, continuous level-of-detail (LOD) framework for 3DGS that maximizes perceived quality under a given compute budget. We distill a visual quality metric, which encodes the spatial, temporal, and peripheral characteristics of human visual perception, into a lightweight, gaze-contingent model that predicts and adaptively modulates rendering LOD across the visual field based on each region's contribution to perceptual quality. This budget-driven LOD modulation, guided by both scene content and gaze behavior, enables significant computation reduction with minimal loss in perceived quality. To support low-power, untethered MR setups, we design an edge-cloud collaborative rendering framework that partially offloads computation to the cloud, further reducing overhead on edge MR devices. Objective metrics and an MR user study demonstrate that, compared to vanilla and foveated LOD baselines, our method achieves superior trade-offs between computational efficiency and user-perceived visual quality.
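As a rough illustration of what budget-driven, gaze-contingent LOD allocation can look like, the Python sketch below distributes a fixed compute budget across screen tiles in proportion to an eccentricity-based importance weight. This is a minimal sketch under stated assumptions: the function names (allocate_lod, perceptual_weight), the analytic acuity falloff, and the proportional allocation rule are all illustrative, not the paper's method, which distills a learned spatio-temporal quality metric rather than using a hand-crafted falloff.

import numpy as np

def eccentricity_deg(px, gaze_px, px_per_deg=30.0):
    # Visual eccentricity (degrees) of tile centers relative to the gaze point.
    # px_per_deg is an assumed display resolution-to-angle conversion factor.
    return np.linalg.norm(px - gaze_px, axis=-1) / px_per_deg

def perceptual_weight(ecc_deg, slope=0.15):
    # Toy acuity falloff: importance decays with eccentricity, loosely
    # mimicking cortical magnification M(e) ~ 1 / (1 + slope * e).
    # The paper's distilled quality model would replace this heuristic.
    return 1.0 / (1.0 + slope * ecc_deg)

def allocate_lod(tile_centers, gaze_px, budget, lod_max=1.0):
    # Spread a global compute budget (sum of per-tile LOD levels) across
    # screen tiles in proportion to perceptual weight, yielding a
    # continuous LOD value in [0, lod_max] per tile.
    w = perceptual_weight(eccentricity_deg(tile_centers, gaze_px))
    lod = budget * w / w.sum()
    return np.clip(lod, 0.0, lod_max)

# Example: 4x4 grid of screen tiles on a 1920x1080 view, gaze at center.
centers = np.stack(np.meshgrid(np.linspace(120, 1800, 4),
                               np.linspace(120, 960, 4)), axis=-1).reshape(-1, 2)
lod = allocate_lod(centers, gaze_px=np.array([960.0, 540.0]), budget=8.0)
print(lod.reshape(4, 4).round(2))  # foveal tiles receive the highest LOD

Running this prints higher LOD values for tiles near the gaze point and progressively lower values toward the periphery, the qualitative behavior the abstract describes; the actual framework additionally conditions on scene content and temporal factors.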

Acknowledgement

This research is supported by a SONY Focused Research Award.