Full metadata record
DC field: Value [Language]
dc.contributor.author: Dey, Arnab
dc.contributor.author: Ahmine, Yassine
dc.contributor.author: Comport, Andrew I.
dc.contributor.editor: Skala, Václav
dc.date.accessioned: 2022-08-29T10:15:59Z
dc.date.available: 2022-08-29T10:15:59Z
dc.date.issued: 2022
dc.identifier.citation: Journal of WSCG. 2022, vol. 30, no. 1-2, p. 34-43. [en]
dc.identifier.issn: 1213-6972 (print)
dc.identifier.issn: 1213-6964 (on-line)
dc.identifier.uri: http://hdl.handle.net/11025/49392
dc.format: 10 s. (10 pages) [cs]
dc.format.mimetype: application/pdf
dc.language.iso: en [en]
dc.publisher: Václav Skala - UNION Agency [cs]
dc.rights: © Václav Skala - UNION Agency [cs]
dc.subject: počítačové vidění (computer vision) [cs]
dc.subject: RGB-D NeRF [cs]
dc.subject: NeRF [cs]
dc.subject: reprezentace neuronové scény (neural scene representation) [cs]
dc.subject: neuronové vykreslování (neural rendering) [cs]
dc.subject: vykreslování objemu (volume rendering) [cs]
dc.title: Depth Assisted Fast Neural Radiance Fields [en]
dc.type: článek (article) [cs]
dc.type: article [en]
dc.rights.access: openAccess [en]
dc.type.version: publishedVersion [en]
dc.description.abstract-translated: Neural scene representations, such as Neural Radiance Fields (NeRF), are based on training a multilayer perceptron (MLP) on a set of color images with known poses. An increasing number of devices now produce RGB-D (color + depth) information, which has been shown to be very important for a wide range of tasks. Therefore, the aim of this paper is to investigate what improvements can be made to these promising implicit representations by incorporating depth information alongside the color images. In particular, the recently proposed Mip-NeRF approach, which uses conical frustums instead of rays for volume rendering, allows one to account for the varying area of a pixel with distance from the camera center. The proposed method additionally models depth uncertainty. This makes it possible to address major limitations of NeRF-based approaches, including improving the accuracy of the geometry, reducing artifacts, and shortening training and prediction times. Experiments are performed on well-known benchmark scenes, and comparisons show improved accuracy in scene geometry and photometric reconstruction while reducing training time by a factor of 3 to 5. [en]
dc.subject.translated: computer vision [en]
dc.subject.translated: RGB-D NeRF [en]
dc.subject.translated: NeRF [en]
dc.subject.translated: neural scene representation [en]
dc.subject.translated: neural rendering [en]
dc.subject.translated: volume rendering [en]
dc.identifier.doi: https://www.doi.org/10.24132/JWSCG.2022.5
dc.type.status: Peer-reviewed [en]
Appears in collections: Volume 30, Number 1-2 (2021)

Files in this item:
File          Description  Size      Format
B02-full.pdf  Full text    10.83 MB  Adobe PDF
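The abstract above combines two technical ingredients: the standard NeRF volume-rendering quadrature (shown here in its ray-based form; Mip-NeRF replaces rays with conical frustums) and a depth term that accounts for sensor uncertainty. The sketch below is illustrative only, not the authors' implementation: the composite and depth_loss functions and the Gaussian inverse-variance weighting are assumptions chosen to make the idea concrete.

```python
# Minimal sketch (not the paper's code) of: (1) NeRF's volume-rendering
# quadrature, which composites per-sample densities and colors into a pixel
# color and an expected depth, and (2) an uncertainty-weighted depth loss.
import numpy as np

def composite(sigmas, colors, t_vals):
    """Composite N samples along one ray (standard NeRF quadrature).

    sigmas: (N,) volume densities; colors: (N, 3) RGB; t_vals: (N+1,) bin edges.
    Returns rendered RGB, expected depth, and per-sample weights.
    """
    deltas = np.diff(t_vals)                 # bin widths along the ray
    alphas = 1.0 - np.exp(-sigmas * deltas)  # per-sample opacity
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]
    weights = trans * alphas                 # compositing weights
    mids = 0.5 * (t_vals[:-1] + t_vals[1:])  # sample midpoints
    rgb = (weights[:, None] * colors).sum(axis=0)
    depth = (weights * mids).sum()           # expected termination depth
    return rgb, depth, weights

def depth_loss(pred_depth, sensor_depth, sensor_std):
    """Illustrative depth term: squared error scaled by inverse sensor
    variance, a common heteroscedastic choice (assumption, not the paper's
    exact formulation)."""
    return (pred_depth - sensor_depth) ** 2 / (2.0 * sensor_std ** 2)

# Toy usage on one ray with 64 samples.
rng = np.random.default_rng(0)
t = np.linspace(2.0, 6.0, 65)
sig = rng.uniform(0.0, 5.0, 64)
col = rng.uniform(0.0, 1.0, (64, 3))
rgb, d, w = composite(sig, col, t)
print(rgb, d, depth_loss(d, 4.0, 0.05))
```

In the paper's RGB-D setting, the rendered expected depth would be supervised against sensor depth; down-weighting that term by the sensor's variance is one common way to model depth uncertainty, though the paper's exact loss may differ.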


