Deep Multi Depth Panoramas for View Synthesis
Kai-En Lin, Zexiang Xu, Ben Mildenhall, Pratul P. Srinivasan, Yannick Hold-Geoffroy, Stephen DiVerdi, Qi Sun, Kalyan Sunkavalli, Ravi Ramamoorthi
ECCV 2020
PDF | Video | Code
Abstract
We propose a learning-based approach for novel view synthesis for multi-camera 360° panorama capture rigs. Previous work constructs RGBD panoramas from such data, allowing for view synthesis with small amounts of translation, but cannot handle the disocclusions and view-dependent effects that are caused by large translations. To address this issue, we present a novel scene representation – Multi Depth Panorama (MDP) – that consists of multiple RGBDα panoramas that represent both scene geometry and appearance. We demonstrate a deep neural network-based method to reconstruct MDPs from multi-camera 360° images. MDPs are more compact than previous 3D scene representations and enable high-quality, efficient new view rendering. We demonstrate this via experiments on both synthetic and real data and comparisons with previous state-of-the-art methods spanning both learning-based approaches and classical RGBD-based methods.
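To make the representation concrete, the following is a minimal Python/NumPy sketch (not the authors' released code) of the final rendering step for an MDP: a stack of RGBDα panorama layers is blended back to front with the standard "over" operator. Reprojecting each layer into the target viewpoint, which uses the layer's depth channel, is assumed to have already happened; the names composite_mdp and warped_layers are hypothetical.

import numpy as np

def composite_mdp(warped_layers: np.ndarray) -> np.ndarray:
    """Over-composite K RGBA panorama layers, ordered back (index 0) to front.

    warped_layers: float array of shape (K, H, W, 4) with RGB in [0, 1] in
    channels 0..2 and alpha in channel 3, already reprojected into the
    target view using each layer's depth. Returns an (H, W, 3) panorama.
    """
    k, h, w, _ = warped_layers.shape
    out = np.zeros((h, w, 3), dtype=np.float32)
    for i in range(k):  # back to front
        rgb = warped_layers[i, ..., :3]
        alpha = warped_layers[i, ..., 3:4]
        out = rgb * alpha + out * (1.0 - alpha)  # standard "over" operator
    return out

# Toy usage: K=4 random layers of a 256x512 panorama.
layers = np.random.rand(4, 256, 512, 4).astype(np.float32)
pano = composite_mdp(layers)  # (256, 512, 3) rendered panorama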

Bibtex
@inproceedings{Lin:2020:DMP,
title={Deep Multi Depth Panoramas for View Synthesis},
author={Lin, Kai-En and Xu, Zexiang and Mildenhall, Ben and Srinivasan, Pratul P and Hold-Geoffroy, Yannick and DiVerdi, Stephen and Sun, Qi and Sunkavalli, Kalyan and Ramamoorthi, Ravi},
booktitle={Computer Vision -- ECCV 2020: 16th European Conference, Glasgow, UK, August 23--28, 2020, Proceedings, Part XIII},
pages={328--344},
year={2020},
organization={Springer}
}