Title: Autograsping Pose of Virtual Hand Model Using the Signed Distance Field Real-time Sampling with Fine-tuning
Authors: Puchalski, Marcin
Wozna-Szczenia, Bozena
Source document citation: WSCG 2023: full papers proceedings: 1. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, pp. 232-240.
Issue date: 2023
Publisher: Václav Skala - UNION Agency
Document type: conference paper (conferenceObject)
URI: http://hdl.handle.net/11025/54429
ISBN: 978-80-86943-32-9
ISSN: 2464-4617 (print)
2464-4625 (CD/DVD)
Keywords: grasping; virtual reality; visualization; interaction; animation
Abstract: Virtual hands have a wide range of applications, including education, medical simulation, training, animation, and gaming. In education and training, they can be used to teach complex procedures or simulate realistic scenarios. This extends to medical training and therapy to simulate real-life surgical procedures and physical rehabilitation exercises. In animation, they can be used to generate believable pre-computed or real-time hand poses and grasping animations. In games, they can be used to control virtual objects and perform actions such as shooting a gun or throwing a ball. In consumer-grade VR setups, virtual hand manipulation is usually approximated by employing controller button states, which can result in unnatural final hand positions. One solution to this problem is the use of pre-recorded hand poses or auto-grasping using physics-based collision detection. However, this approach has limitations, such as not taking into account non-convex parts of objects, and can have a significant impact on performance. In this paper, we propose a new approach that utilizes a snapshot of the Signed Distance Field (SDF) of the area below the user's hand during the grab action. By sampling this 3D matrix during the finger-bending phase, we obtain information about the distance of each finger part to the object surface. Comparing our solution to those based on discrete collision detection shows better visual results and significantly less computational impact.
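
The abstract describes the mechanism only at a high level: capture a snapshot of the Signed Distance Field around the grab region as a 3D grid, then sample that grid for each finger part while the fingers bend, stopping a finger when it reaches the object surface. The Python sketch below illustrates that idea under stated assumptions; the function names and parameters (sample_sdf, bend_finger, contact_epsilon, max_angle, phalanx_positions) and the trilinear-interpolation details are illustrative guesses, not the authors' implementation.

import numpy as np

# Illustrative sketch (not the paper's code): trilinear sampling of a captured
# SDF grid, plus a finger-bending loop that stops near the object surface.

def sample_sdf(grid, origin, voxel_size, point):
    """Trilinearly interpolate the signed distance at a world-space point."""
    g = (np.asarray(point, dtype=float) - origin) / voxel_size
    i0 = np.clip(np.floor(g).astype(int), 0, np.array(grid.shape) - 2)
    t = np.clip(g - i0, 0.0, 1.0)
    # Blend the 8 surrounding voxel values along x, then y, then z.
    c = grid[i0[0]:i0[0] + 2, i0[1]:i0[1] + 2, i0[2]:i0[2] + 2]
    cx = c[0] * (1 - t[0]) + c[1] * t[0]
    cy = cx[0] * (1 - t[1]) + cx[1] * t[1]
    return cy[0] * (1 - t[2]) + cy[1] * t[2]

def bend_finger(joint_angles, phalanx_positions, grid, origin, voxel_size,
                step=0.02, contact_epsilon=0.005, max_angle=1.6):
    """Bend a finger until any of its sample points is close to the surface."""
    while max(joint_angles) < max_angle:
        distances = [sample_sdf(grid, origin, voxel_size, p)
                     for p in phalanx_positions(joint_angles)]
        if min(distances) <= contact_epsilon:  # a phalanx is at the surface
            break                              # stop: the finger rests on it
        joint_angles = [min(a + step, max_angle) for a in joint_angles]
    return joint_angles

Because the SDF snapshot is taken once at the moment of the grab, the per-frame work during the bending phase reduces to cheap grid lookups of this kind, which is consistent with the abstract's claim of a much lower computational impact than discrete collision detection.
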
Rights: © Václav Skala - UNION Agency
Appears in collections: WSCG 2023: Full Papers Proceedings

Files in this item:
File | Description | Size | Format
F97-full.pdf | Full text | 10.85 MB | Adobe PDF