Omnidirectional Stereo Dataset


We present synthetic datasets for omnidirectional stereo. We virtually implement a camera rig with four fisheye cameras mounted on it, and all datasets were rendered using Blender.
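For concreteness, the sketch below shows one way such a rig can be modeled: four fisheye cameras facing front, right, rear, and left, each covering a 220° field of view as labeled in the sample figures below. The baseline, focal length, and equidistant projection used here are illustrative assumptions, not the datasets' actual calibration.

import numpy as np

# Illustrative sketch only: the baseline, focal length, and equidistant
# fisheye model are assumptions, not the datasets' calibration.

def yaw_rotation(deg):
    """Rotation about the vertical axis by `deg` degrees."""
    t = np.deg2rad(deg)
    return np.array([[ np.cos(t), 0.0, np.sin(t)],
                     [ 0.0,       1.0, 0.0      ],
                     [-np.sin(t), 0.0, np.cos(t)]])

# Four cameras facing front, right, rear, and left (90 degrees apart),
# each offset from the rig center along its own viewing direction.
BASELINE = 0.2  # meters; placeholder value
RIG = {
    name: {"R": yaw_rotation(a),
           "t": yaw_rotation(a) @ np.array([0.0, 0.0, BASELINE])}
    for name, a in [("front", 0), ("right", 90), ("rear", 180), ("left", 270)]
}

def fisheye_ray(u, v, cx, cy, focal):
    """Back-project a pixel of an equidistant fisheye image (r = f * theta)
    into a unit ray in the camera frame. With a 220-degree FOV, theta may
    exceed 90 degrees, i.e. rays can point behind the image plane."""
    dx, dy = u - cx, v - cy
    r = np.hypot(dx, dy)
    theta = r / focal            # equidistant projection model
    phi = np.arctan2(dy, dx)
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

With such a model, a pixel (u, v) of any of the four images maps to a world-frame ray as RIG[name]["R"] @ fisheye_ray(u, v, cx, cy, focal).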

Contact: Changhee Won (chwon@hanyang.ac.kr)

Synthetic Urban Datasets

Each dataset consists of 1000 sequential frames of a city landscape, which we split into two parts: the first 700 frames for training and the last 300 for testing.

Three variants are provided:

  • Sunny
  • Cloudy
  • Sunset
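If the frames of each variant are stored sequentially, the split above can be indexed with a few lines. The directory layout and zero-padded file names below are assumptions for illustration, not the datasets' actual naming scheme.

import os

# Hypothetical layout: <root>/<variant>/<camera>/<frame:04d>.png
# The naming actually shipped with the datasets may differ.
VARIANTS = ["sunny", "cloudy", "sunset"]
CAMERAS = ["front", "right", "rear", "left"]

def split_indices(num_frames=1000, num_train=700):
    """First 700 sequential frames for training, last 300 for testing."""
    return list(range(num_train)), list(range(num_train, num_frames))

def frame_paths(root, variant, frame_idx):
    """Return the four fisheye image paths for one frame of one variant."""
    return {cam: os.path.join(root, variant, cam, f"{frame_idx:04d}.png")
            for cam in CAMERAS}

train_ids, test_ids = split_indices()
# e.g. the four training inputs of the first Sunny frame:
# frame_paths("/data/omnistereo_urban", "sunny", train_ids[0])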

[Sample figure: input images (front within 220° FOV, right, rear, left) and omnidirectional depth map (inverse depth map, reference panorama).]
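The ground truth for each frame is an omnidirectional inverse depth map aligned with the reference panorama. Below is a minimal sketch of converting it to depth and 3D points, assuming the map stores per-pixel inverse depth (1/d) in an equirectangular layout; the longitude/latitude convention is an assumption, so verify it against the data before use.

import numpy as np

def invdepth_to_points(inv_depth, eps=1e-6):
    """Convert an HxW inverse depth map (values ~ 1/d) into HxWx3 points.
    The equirectangular angle convention below is an assumption."""
    h, w = inv_depth.shape
    # Pixel centers -> spherical angles (longitude/latitude).
    lon = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi      # [-pi, pi)
    lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi      # (pi/2, -pi/2)
    lon, lat = np.meshgrid(lon, lat)
    # Unit ray direction for every panorama pixel.
    rays = np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    depth = 1.0 / np.clip(inv_depth, eps, None)               # guard against 1/0
    return rays * depth[..., None]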

Download

OmniHouse

OmniHouse consists of synthesized indoor scenes reproduced using models from the SUNCG dataset [2] and a few additional models. We collect 451 house models and provide 2048 frames for training and 512 for testing.

[Sample figure: input images (front within 220° FOV, right, rear, left) and omnidirectional depth map (inverse depth map, reference panorama).]

Download

OmniThings

OmniThings consists of scenes of randomly generated objects placed around the camera rig. We collect 33474 3D object models from ShapeNet [3] and provide 9216 scenes for training and 1024 for testing.

[Sample figure: input images (front within 220° FOV, right, rear, left) and omnidirectional depth map (inverse depth map, reference panorama).]

Download

Paper


  • Changhee Won, Jongbin Ryu, and Jongwoo Lim, "OmniMVS: End-to-End Learning for Omnidirectional Stereo Matching", in ICCV 2019. [video]
  • Changhee Won, Jongbin Ryu, and Jongwoo Lim, "SweepNet: Wide-baseline Omnidirectional Depth Estimation", in ICRA 2019. [arxiv] [video] [code]

Citation


Will be updated soon



License


These datasets are released under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 license (CC BY-NC-SA 3.0), which permits free use for non-commercial purposes, including research.

References


  • [1] Zhang et al., "Benefit of Large Field-of-View Cameras for Visual Odometry", in ICRA 2016. [link]
  • [2] Song et al., "Semantic Scene Completion from a Single Depth Image", in CVPR 2017. [link]
  • [3] Chang et al., "ShapeNet: An Information-Rich 3D Model Repository". [link]