Title: Robust temporal depth enhancement method for dynamic virtual view synthesis
Authors: Liu, Can
Zhang, Weizheng
Citation: WSCG 2015: full papers proceedings: 23rd International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision in co-operation with EUROGRAPHICS Association, p. 191-200.
Issue Date: 2015
Publisher: Václav Skala - UNION Agency
Document type: conference paper (conferenceObject)
URI: wscg.zcu.cz/WSCG2015/CSRN-2501.pdf
http://hdl.handle.net/11025/29517
ISBN: 978-80-86943-65-7 (print)
978-80-86943-61-9 (CD-ROM)
ISSN: 2464-4617 (print)
2464-4625 (CD-ROM)
Keywords: FTV; DIBR; temporal consistency; depth enhancement
Abstract: Depth-image-based rendering (DIBR) is a view synthesis technique that generates virtual views by warping the reference images according to their depth maps. The quality of the synthesized views therefore depends heavily on the accuracy of the depth maps. In dynamic scenarios, however, depth sequences estimated frame by frame with stereo matching can be temporally inconsistent, especially in static regions, which causes uncomfortable flickering artifacts in the synthesized videos. Temporal filtering can suppress this inconsistency, but it may also spread depth errors: such depth enhancement algorithms improve the temporal consistency of the synthesized videos at the risk of degrading their quality. Since conventional methods rarely achieve both properties, this paper presents a robust temporal depth enhancement (RTDE) method for static regions, which propagates only reliable depth values into succeeding frames and thereby improves both the accuracy and the temporal consistency of the depth estimates, benefiting the quality of the synthesized videos. In addition, we propose a novel evaluation metric to quantitatively compare the temporal consistency of our method with the state of the art. Experimental results demonstrate the robustness of our method for dynamic virtual view synthesis: in static regions, both the temporal consistency and the quality of the synthesized videos are improved.
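
To make the idea behind temporal depth propagation concrete, the Python sketch below illustrates the general principle of carrying depth values into succeeding frames over static regions, together with a simple flicker measure. It is a minimal illustration only, not the authors' RTDE algorithm: the static-region test (frame differencing with a hypothetical threshold color_thresh) and the flicker measure are generic stand-ins for the reliability test and the evaluation metric described in the full paper.

import numpy as np

def propagate_static_depth(prev_color, cur_color, prev_depth, cur_depth,
                           color_thresh=10.0):
    # A pixel is treated as static when the mean absolute colour
    # difference between consecutive frames is below color_thresh
    # (a hypothetical threshold, not a value from the paper).
    diff = np.abs(cur_color.astype(np.float32)
                  - prev_color.astype(np.float32)).mean(axis=2)
    static = diff < color_thresh
    # Carry the previous depth, assumed reliable, into the current
    # frame over static pixels; keep the per-frame estimate elsewhere.
    out = cur_depth.copy()
    out[static] = prev_depth[static]
    return out, static

def flicker(prev_depth, cur_depth, static):
    # Generic temporal-consistency measure: mean absolute depth change
    # between consecutive frames over static pixels (lower is better).
    return float(np.abs(cur_depth - prev_depth)[static].mean())

# Synthetic two-frame example: a static background with one moving
# square; the per-frame depth estimate carries simulated stereo noise.
rng = np.random.default_rng(0)
h, w = 64, 64
prev_color = np.full((h, w, 3), 128, np.uint8)
cur_color = prev_color.copy()
cur_color[20:30, 20:30] = 255                      # the moving object
prev_depth = np.full((h, w), 100.0)
cur_depth = prev_depth + rng.normal(0.0, 5.0, (h, w))

enhanced, static = propagate_static_depth(prev_color, cur_color,
                                          prev_depth, cur_depth)
print(flicker(prev_depth, cur_depth, static))   # noisy per-frame estimate
print(flicker(prev_depth, enhanced, static))    # ~0 after propagation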
Rights: © Václav Skala - UNION Agency
Appears in Collections: WSCG 2015: Full Papers Proceedings

Files in This Item:
File: Liu.pdf (Full text, 1.49 MB, Adobe PDF)

