Full metadata record
DC field | Value | Language
dc.contributor.author | Shekow, Marius |
dc.contributor.author | Oppermann, Leif |
dc.contributor.editor | Skala, Václav |
dc.date.accessioned | 2017-10-10T07:41:09Z |
dc.date.available | 2017-10-10T07:41:09Z |
dc.date.issued | 2014 |
dc.identifier.citation | WSCG 2014: communication papers proceedings: 22nd International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision in co-operation with EUROGRAPHICS Association, p. 83-89. | en
dc.identifier.isbn | 978-80-86943-71-8 |
dc.identifier.uri | wscg.zcu.cz/WSCG2014/!!_2014-WSCG-Communication.pdf |
dc.identifier.uri | http://hdl.handle.net/11025/26381 |
dc.format | 7 p. | cs
dc.format.mimetype | application/pdf |
dc.language.iso | en | en
dc.publisher | Václav Skala - UNION Agency | cs
dc.relation.ispartofseries | WSCG 2014: communication papers proceedings | en
dc.rights | © Václav Skala - UNION Agency | cs
dc.subject | natural user interface | cs
dc.subject | depth sensor | cs
dc.subject | finger-tip recognition | cs
dc.subject | SwissRanger SR4000 | cs
dc.subject | Microsoft Kinect | cs
dc.subject | Kinect for Windows 2 alpha development kit | cs
dc.title | On maximum geometric finger-tip recognition distance using depth sensors | en
dc.type | conference paper | cs
dc.type | conferenceObject | en
dc.rights.access | openAccess | en
dc.type.version | publishedVersion | en
dc.description.abstract-translated | Depth sensor data is commonly used as the basis for Natural User Interfaces (NUI). The recent availability of different camera systems at affordable prices has caused a significant uptake in the research community, e.g. for building hand-pose or gesture-based controls in various scenarios and with different algorithms. The limited resolution and noise of the utilized cameras naturally constrain the distance between camera and user at which a meaningful interaction can still be designed. We therefore conducted extensive accuracy experiments to explore the maximum distance that allows for recognizing finger-tips of an average-sized hand using three popular depth cameras (SwissRanger SR4000, Microsoft Kinect for Windows and the Alpha Development Kit of the Kinect for Windows 2), with two geometric algorithms and a manual image analysis. In our experiment, the palm faces the sensors with all five fingers extended and is moved at distances of 0.5 to 3.5 meters from the sensor. Quantitative data is collected regarding the number of finger-tips recognized in the binary hand outline image for each sensor, using two algorithms. For qualitative analysis, samples of the hand outline are also collected. The quantitative results proved to be inconclusive due to false positives and negatives caused by noise. In turn, our qualitative analysis, performed by inspecting the hand outline images manually, provides a conclusive understanding of the depth data quality. We find that recognition works reliably up to 1.5 m (SR4000, Kinect) and 2.4 m (Kinect 2). These insights are generally applicable for designing NUIs that rely on depth sensor data. | en
dc.subject.translated | natural user interaction | en
dc.subject.translated | depth sensor | en
dc.subject.translated | finger-tip recognition | en
dc.subject.translated | SwissRanger SR4000 | en
dc.subject.translated | Microsoft Kinect | en
dc.subject.translated | Kinect for Windows 2 alpha development kit | en
dc.type.status | Peer-reviewed | en
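
The abstract above notes that finger-tips are counted in a binary hand outline image using two geometric algorithms, but this record does not describe those algorithms. As a purely hypothetical illustration of that kind of geometric analysis, the Python/OpenCV sketch below counts candidate finger-tips in a binary hand mask via convex-hull/convexity-defect analysis; the function name count_fingertips and the valley-depth threshold are assumptions for demonstration, not the authors' method.

# Hypothetical illustration only: counts candidate finger-tips in a binary
# hand-outline image via convexity-defect analysis (OpenCV). Not the paper's
# algorithms; names and thresholds are assumptions.
import cv2
import numpy as np


def count_fingertips(hand_mask, min_valley_depth=20.0):
    """Estimate the number of extended fingers in an 8-bit binary hand mask."""
    # findContours returns (contours, hierarchy) in OpenCV 4.x and
    # (image, contours, hierarchy) in 3.x; [-2] selects contours in both.
    contours = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)  # assume largest blob is the hand
    hull = cv2.convexHull(hand, returnPoints=False)
    if hull is None or len(hull) < 4:
        return 0
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Each sufficiently deep convexity defect is a valley between two fingers;
    # n valleys imply roughly n + 1 extended finger-tips. Defect depth is a
    # fixed-point value (pixels * 256).
    valleys = sum(1 for start, end, far, depth in defects[:, 0]
                  if depth / 256.0 > min_valley_depth)
    return valleys + 1 if valleys else 0


if __name__ == "__main__":
    # Example usage with a synthetic mask (a filled rectangle yields 0 fingers).
    mask = np.zeros((240, 320), dtype=np.uint8)
    cv2.rectangle(mask, (100, 100), (220, 200), 255, thickness=-1)
    print(count_fingertips(mask))
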
Appears in collections: WSCG 2014: Communication Papers Proceedings

Files in this item:
File | Description | Size | Format
Shekow.pdf | Full text | 1.43 MB | Adobe PDF


Use this identifier to cite or link to this item: http://hdl.handle.net/11025/26381

All items in DSpace are protected by copyright, with all rights reserved.