Full metadata record
dc.contributor.author: Shekhar, Sumit
dc.contributor.author: Reimann, Max
dc.contributor.author: Wattaseril, Jobin Idiculla
dc.contributor.author: Semmo, Amir
dc.contributor.author: Döllner, Jürgen
dc.contributor.author: Trapp, Matthias
dc.contributor.editor: Skala, Václav
dc.date.accessioned: 2023-10-03T14:44:07Z
dc.date.available: 2023-10-03T14:44:07Z
dc.date.issued: 2023
dc.identifier.citation: Journal of WSCG. 2023, vol. 31, no. 1-2, p. 11-24. [en]
dc.identifier.issn: 1213-6972 (hard copy)
dc.identifier.issn: 1213-6980 (CD-ROM)
dc.identifier.issn: 1213-6964 (on-line)
dc.identifier.uri: http://hdl.handle.net/11025/54280
dc.format: 14 s. (14 pages) [cs]
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.publisher: Václav Skala - UNION Agency [en]
dc.rights: © Václav Skala - UNION Agency [en]
dc.subject: reálný čas (real-time) [cs]
dc.subject: interaktivní (interactive) [cs]
dc.subject: slabé světlo (low-light) [cs]
dc.subject: obraz (image) [cs]
dc.subject: video [cs]
dc.subject: zvýšení (enhancement) [cs]
dc.title: ALIVE: Adaptive-Chromaticity for Interactive Low-light Image and Video Enhancement [en]
dc.type: článek (article) [cs]
dc.type: article [en]
dc.rights.access: openAccess [en]
dc.type.version: publishedVersion [en]
dc.description.abstract-translated: Image acquisition in low-light conditions suffers from poor quality and significant degradation in visual aesthetics. This affects the visual perception of the acquired image and the performance of computer vision and image processing algorithms applied after acquisition. Videos are even more challenging, as the additional temporal dimension requires quality to be preserved in a temporally coherent manner. We present a simple yet effective approach for low-light image and video enhancement. To this end, we introduce Adaptive Chromaticity, which refers to an adaptive computation of image chromaticity. This adaptivity avoids the costly decomposition of a low-light image into illumination and reflectance employed by many existing techniques, and we thereby achieve interactive performance even for high-resolution images. Moreover, all stages of our method consist only of point-based operations and high-pass or low-pass filtering, ensuring that the amount of temporal incoherence is negligible when the method is applied on a per-frame basis to videos. Our results on standard low-light image datasets show the efficacy of our method and its qualitative and quantitative superiority over several state-of-the-art approaches. A user study demonstrates the preference for our method over state-of-the-art approaches on videos captured in the wild. [en]
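The abstract only sketches the approach, so the following Python sketch illustrates the general idea of chromaticity-preserving, point-based low-light enhancement: brighten an intensity layer with a point operation and a low-pass/high-pass split, then recombine it with the per-pixel chromaticity. It is not the authors' ALIVE implementation; the function name enhance_low_light, the gamma curve, the box-filter split, and all parameter values are illustrative assumptions.

```python
# Minimal illustrative sketch of chromaticity-preserving low-light enhancement.
# NOT the ALIVE algorithm from the paper: the adaptivity of the chromaticity
# computation is omitted, and gamma/eps/blur_ksize are assumed values.
import numpy as np
import cv2

def enhance_low_light(bgr, gamma=0.45, eps=1e-4, blur_ksize=15):
    """Brighten an 8-bit BGR image while roughly preserving chromaticity."""
    img = bgr.astype(np.float32) / 255.0
    luminance = img.sum(axis=2, keepdims=True) + eps   # per-pixel intensity
    chroma = img / luminance                            # chromaticity (color ratios)

    # Point-based brightening of a low-pass base layer (gamma curve as a
    # stand-in for the paper's point operations), with the high-pass detail
    # added back unchanged.
    base = cv2.blur(luminance[..., 0], (blur_ksize, blur_ksize))[..., None]  # low-pass
    detail = luminance - base                                                # high-pass
    enhanced_lum = np.power(base, gamma) + detail

    out = np.clip(chroma * enhanced_lum, 0.0, 1.0)
    return (out * 255.0).astype(np.uint8)

if __name__ == "__main__":
    frame = cv2.imread("dark_frame.png")   # hypothetical input frame
    cv2.imwrite("enhanced_frame.png", enhance_low_light(frame))
```

Because every step is either a per-pixel point operation or a low-/high-pass filter, applying such a pipeline frame by frame introduces little temporal incoherence, which matches the abstract's rationale for per-frame video processing.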
dc.subject.translated: real-time [en]
dc.subject.translated: interactive [en]
dc.subject.translated: low-light [en]
dc.subject.translated: image [en]
dc.subject.translated: video [en]
dc.subject.translated: enhancement [en]
dc.identifier.doi: https://www.doi.org/10.24132/JWSCG.2023.2
dc.type.status: Peer-reviewed [en]
Appears in Collections: Volume 31, Number 1-2 (2023)

Files in This Item:
File: !_2023-Journal_WSCG-21-34.pdf
Description: Plný text (Full text)
Size: 5.73 MB
Format: Adobe PDF


Please use this identifier to cite or link to this item: http://hdl.handle.net/11025/54280

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.