Title: On maximum geometric finger-tip recognition distance using depth sensors
Authors: Shekow, Marius
Oppermann, Leif
Source citation: WSCG 2014: Communication Papers Proceedings: 22nd International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision in co-operation with EUROGRAPHICS Association, pp. 83-89.
Issue date: 2014
Publisher: Václav Skala - UNION Agency
Document type: conference paper (conferenceObject)
URI: wscg.zcu.cz/WSCG2014/!!_2014-WSCG-Communication.pdf
http://hdl.handle.net/11025/26381
ISBN: 978-80-86943-71-8
Keywords: natural user interaction; depth sensor; finger-tip recognition; SwissRanger SR4000; Microsoft Kinect; Kinect for Windows 2 alpha development kit
Abstract: Depth sensor data is commonly used as the basis for Natural User Interfaces (NUI). The recent availability of different camera systems at affordable prices has caused a significant uptake in the research community, e.g. for building hand-pose or gesture-based controls in various scenarios and with different algorithms. The limited resolution and noise of the utilized cameras naturally constrain the distance between camera and user at which a meaningful interaction can still be designed. We therefore conducted extensive accuracy experiments to explore the maximum distance that allows for recognizing the finger-tips of an average-sized hand using three popular depth cameras (SwissRanger SR4000, Microsoft Kinect for Windows and the Alpha Development Kit of the Kinect for Windows 2), with two geometric algorithms and a manual image analysis. In our experiment, the palm faces the sensor with all five fingers extended and is moved through distances of 0.5 to 3.5 meters from the sensor. Quantitative data is collected regarding the number of finger-tips recognized in the binary hand outline image for each sensor, using two algorithms. For qualitative analysis, samples of the hand outline are also collected. The quantitative results proved to be inconclusive due to false positives and negatives caused by noise. In turn, our qualitative analysis, performed by inspecting the hand outline images manually, provides a conclusive understanding of the depth data quality. We find that recognition works reliably up to 1.5 m (SR4000, Kinect) and 2.4 m (Kinect 2). These insights are generally applicable to designing NUIs that rely on depth sensor data.
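
For illustration of the kind of geometric finger-tip recognition the abstract refers to, the following is a minimal sketch: it is not one of the paper's two algorithms (which are not named here), but a common geometric approach that counts finger-tip candidates in a binary hand-outline image using OpenCV's convex hull and convexity defects. The function name and the defect-depth threshold are illustrative assumptions.

```python
# Minimal sketch, NOT the algorithms evaluated in the paper: count finger-tip
# candidates in a binary hand outline image via convex-hull convexity defects
# (requires OpenCV >= 4).
import cv2
import numpy as np

def count_fingertip_candidates(hand_mask: np.ndarray,
                               min_defect_depth_px: float = 20.0) -> int:
    """hand_mask: single-channel 8-bit image with hand pixels set to 255."""
    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)        # largest blob = hand
    hull = cv2.convexHull(hand, returnPoints=False)
    if hull is None or len(hull) < 4:
        return 0
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # A sufficiently deep convexity defect is a valley between two extended
    # fingers; N valleys imply roughly N + 1 visible finger-tips.
    valleys = sum(1 for i in range(defects.shape[0])
                  if defects[i, 0, 3] / 256.0 > min_defect_depth_px)
    return min(valleys + 1, 5) if valleys else 0
```

Note that at larger sensor distances the hand occupies fewer pixels and the outline becomes noisier, so a fixed pixel threshold such as min_defect_depth_px becomes unreliable; this is consistent with the distance limits reported in the abstract.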
Rights: © Václav Skala - UNION Agency
Appears in collections: WSCG 2014: Communication Papers Proceedings

Files attached to this record:
File        Description  Size     Format
Shekow.pdf  Full text    1.43 MB  Adobe PDF
