Full metadata record
DC Field | Value | Language
dc.contributor.author | Liu, Can | -
dc.contributor.author | Zhang, Weizheng | -
dc.contributor.editor | Skala, Václav | -
dc.contributor.editor | Gavrilova, Marina | -
dc.date.accessioned | 2018-04-09T07:44:13Z | -
dc.date.available | 2018-04-09T07:44:13Z | -
dc.date.issued | 2015 | -
dc.identifier.citation | WSCG 2015: full papers proceedings: 23rd International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision in co-operation with EUROGRAPHICS Association, p. 191-200. | en
dc.identifier.isbn | 978-80-86943-65-7 (print) | -
dc.identifier.isbn | 978-80-86943-61-9 (CD-ROM) | -
dc.identifier.issn | 2464-4617 (print) | -
dc.identifier.issn | 2464-4625 (CD-ROM) | -
dc.identifier.uri | wscg.zcu.cz/WSCG2015/CSRN-2501.pdf | -
dc.identifier.uri | http://hdl.handle.net/11025/29517 | -
dc.description.abstract | Depth-image-based rendering (DIBR) is a view synthesis technique that generates virtual views by warping reference images according to depth maps. The quality of the synthesized views depends heavily on the accuracy of the depth maps. For dynamic scenes, however, depth sequences obtained by frame-by-frame stereo matching can be temporally inconsistent, especially in static regions, which leads to uncomfortable flickering artifacts in the synthesized videos. This problem can be mitigated by depth enhancement methods that apply temporal filtering to suppress depth inconsistency, yet such methods may also propagate depth errors: although they increase the temporal consistency of the synthesized videos, they risk reducing their quality. Since conventional methods may not achieve both properties, we present a robust temporal depth enhancement (RTDE) method for static regions, which propagates only the reliable depth values into succeeding frames, improving both the accuracy and the temporal consistency of the depth estimates. This in turn benefits the quality of the synthesized videos. In addition, we propose a novel evaluation metric to quantitatively compare the temporal consistency of our method with the state of the art. Experimental results demonstrate the robustness of our method for dynamic virtual view synthesis: both the temporal consistency and the quality of the synthesized videos in static regions are improved. | en
dc.format | 10 pages | cs
dc.format.mimetype | application/pdf | -
dc.language.iso | en | en
dc.publisher | Václav Skala - UNION Agency | cs
dc.relation.ispartofseries | WSCG 2015: full papers proceedings | en
dc.rights | © Václav Skala - UNION Agency | en
dc.subject | FTV | cs
dc.subject | DIBR | cs
dc.subject | temporal consistency | cs
dc.subject | depth enhancement | cs
dc.title | Robust temporal depth enhancement method for dynamic virtual view synthesis | en
dc.type | conference paper | cs
dc.type | conferenceObject | en
dc.rights.access | openAccess | en
dc.type.version | publishedVersion | en
dc.subject.translated | FTV | en
dc.subject.translated | DIBR | en
dc.subject.translated | temporal consistency | en
dc.subject.translated | depth enhancement | en
dc.type.status | Peer-reviewed | en
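
The abstract describes depth-image-based rendering (DIBR): a virtual view is generated by warping a reference image according to its per-pixel depth. As a rough illustration of that warping step only (not of the paper's RTDE method), the sketch below forward-warps a rectified stereo view along the baseline; the function name, camera parameters, and z-buffer handling are all illustrative assumptions.

```python
# Minimal sketch of DIBR forward warping for a rectified stereo setup,
# where disparity = baseline * focal / depth. Illustrative only; the
# parameter values are assumptions, not taken from the paper.
import numpy as np

def dibr_forward_warp(reference, depth, baseline=0.05, focal=500.0):
    """Forward-warp a reference image into a virtual view.

    reference : (H, W) grayscale image
    depth     : (H, W) depth map in metres (larger = farther)
    Returns the synthesized view; disoccluded pixels remain 0.
    """
    h, w = reference.shape
    virtual = np.zeros_like(reference)
    # z-buffer keeps the nearest surface when several source pixels
    # land on the same target pixel
    zbuf = np.full((h, w), np.inf)
    disparity = baseline * focal / np.maximum(depth, 1e-6)
    for y in range(h):
        for x in range(w):
            xt = int(round(x - disparity[y, x]))  # shift along baseline
            if 0 <= xt < w and depth[y, x] < zbuf[y, xt]:
                zbuf[y, xt] = depth[y, x]
                virtual[y, xt] = reference[y, x]
    return virtual
```

Temporally inconsistent depth values change the disparity, and thus the warped pixel positions, from frame to frame even in static regions, which is the flickering the abstract refers to.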
Appears in collections: WSCG 2015: Full Papers Proceedings

Files in this item:
File | Description | Size | Format
Liu.pdf | Full text | 1.49 MB | Adobe PDF


Use this identifier to cite or link to this item: http://hdl.handle.net/11025/29517

All items in DSpace are protected by copyright, with all rights reserved.