Full metadata record
DC Field | Value | Language
dc.contributor.author | Khatri, Nilay | -
dc.contributor.author | Joshi, Manjunath V. | -
dc.contributor.editor | Skala, Václav | -
dc.date.accessioned | 2015-02-02T06:28:30Z | -
dc.date.available | 2015-02-02T06:28:30Z | -
dc.date.issued | 2014 | -
dc.identifier.citation | WSCG 2014: Full Papers Proceedings: 22nd International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision in co-operation with EUROGRAPHICS Association, p. 1-8. | en
dc.identifier.isbn | 978-80-86943-74-9 | -
dc.identifier.uri | http://wscg.zcu.cz/WSCG2013/!_2013-WSCG-Full-proceedings.pdf | -
dc.identifier.uri | http://hdl.handle.net/11025/11909 | -
dc.description.abstract | Exploiting the similarity of patches within multiple resolution versions of an image is often used to solve many vision problems. In particular, for image upsampling there has recently been a slew of algorithms that exploit patch repetitions within and across different scales of an image, along with priors to preserve the scene structure of the reconstructed image. One such method, the self-learning algorithm [1], uses only one image to achieve high magnification factors. However, as the image resolution increases, the number of patches in the dictionary grows dramatically, making the reconstruction computationally prohibitive. In this paper, we propose a method that removes the redundancies inherent in large self-learned dictionaries to upsample an image without using any regularization methods or priors, and drastically reduces the time complexity. We further prove that any low-variance (low-detail) patch that does not find a match can be represented as a linear combination of only low-variance patches from the dictionary; the same principle applies to high-variance (high-detail) patches. Images with high scaling factors can be obtained with this method without any regularization or prior information, and the result can then be subjected to further regularization with the necessary prior(s) to refine the reconstruction. | en
dc.format | 8 s. | cs
dc.format.mimetype | application/pdf | -
dc.language.iso | en | en
dc.publisher | Václav Skala - UNION Agency | cs
dc.relation.ispartofseries | WSCG 2014: Full Papers Proceedings | en
dc.rights | © Václav Skala - UNION Agency | en
dc.subject | sebeučení | cs
dc.subject | převzorkování obrazu | cs
dc.subject | super rozlišení | cs
dc.subject | slovníkové učení | cs
dc.title | Efficient Self-learning for Single Image Upsampling | en
dc.type | konferenční příspěvek | cs
dc.type | conferenceObject | en
dc.rights.access | openAccess | en
dc.type.version | publishedVersion | en
dc.subject.translated | self-learning | en
dc.subject.translated | image upsampling | en
dc.subject.translated | super resolution | en
dc.subject.translated | dictionary learning | en
dc.type.status | Peer-reviewed | en
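
A minimal, hypothetical sketch of the idea stated in the abstract above: a low-variance (smooth) patch that finds no direct match in the self-learned dictionary is approximated as a linear combination of low-variance dictionary patches only. The patch size, variance threshold, and least-squares fit below are illustrative assumptions made here, not the authors' actual procedure.

import numpy as np

def split_by_variance(patches, threshold=25.0):
    """Split flattened patches into low- and high-variance groups.

    The threshold is an assumed, illustrative value; it is not taken
    from the paper.
    """
    variances = patches.var(axis=1)
    return patches[variances < threshold], patches[variances >= threshold]

def reconstruct(patch, sub_dictionary):
    """Approximate a patch as a linear combination of dictionary patches.

    Solves min_w ||D^T w - patch||_2, where D stores one flattened
    patch per row.
    """
    weights, *_ = np.linalg.lstsq(sub_dictionary.T, patch, rcond=None)
    return sub_dictionary.T @ weights

# Toy usage with a synthetic "self-learned" dictionary of flattened
# 5x5 patches: half smooth (low variance), half textured (high variance).
rng = np.random.default_rng(0)
smooth = rng.uniform(100.0, 110.0, size=(250, 25))
textured = rng.uniform(0.0, 255.0, size=(250, 25))
dictionary = np.vstack([smooth, textured])
low_dict, high_dict = split_by_variance(dictionary)

query = rng.uniform(100.0, 110.0, size=25)   # a smooth, low-variance patch
approx = reconstruct(query, low_dict)        # uses only low-variance atoms
print("reconstruction error:", np.linalg.norm(query - approx))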
Appears in collections: WSCG 2014: Full Papers Proceedings

Files in this item:
File | Description | Size | Format
Khatri.pdf | Full text | 22.33 MB | Adobe PDF


Use this identifier to cite or link to this item: http://hdl.handle.net/11025/11909

All items in DSpace are protected by copyright, with all rights reserved.