Title: Autograsping Pose of Virtual Hand Model Using the Signed Distance Field Real-time Sampling with Fine-tuning
Authors: Puchalski, Marcin
Wozna-Szczenia, Bozena
Citation: WSCG 2023: full papers proceedings: 1. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, p. 232-240.
Issue Date: 2023
Publisher: Václav Skala - UNION Agency
Document type: conference paper (conferenceObject)
URI: http://hdl.handle.net/11025/54429
ISBN: 978-80-86943-32-9
ISSN: 2464-4617 (print)
2464-4625 (CD/DVD)
Keywords: grasping;virtual reality;visualization;interaction;animation
Abstract: Virtual hands have a wide range of applications, including education, medical simulation, training, animation, and gaming. In education and training, they can be used to teach complex procedures or simulate realistic scenarios. This extends to medical training and therapy to simulate real-life surgical procedures and physical rehabilitation exercises. In animation, they can be used to generate believable pre-computed or real-time hand poses and grasping animations. In games, they can be used to control virtual objects and perform actions such as shooting a gun or throwing a ball. In consumer-grade VR setups, virtual hand manipulation is usually approximated by employing controller button states, which can result in unnatural final hand positions. One solution to this problem is the use of pre-recorded hand poses or auto-grasping using physics-based collision detection. However, this approach has limitations, such as not taking into account non-convex parts of objects, and can have a significant impact on performance. In this paper, we propose a new approach that utilizes a snapshot of the Signed Distance Field (SDF) of the area below the user's hand during the grab action. By sampling this 3D matrix during the finger-bending phase, we obtain information about the distance of each finger part to the object surface. Comparing our solution to those based on discrete collision detection shows better visual results and significantly less computational impact.
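The core idea in the abstract, sampling a snapshotted 3D SDF grid during the finger-bending phase to get each finger part's distance to the object surface, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the grid here is a toy analytic sphere SDF, and the function name `sample_sdf`, the trilinear interpolation scheme, and the contact threshold are all assumptions chosen for clarity.

```python
import numpy as np

def sample_sdf(grid, origin, cell_size, point):
    """Trilinearly sample a snapshotted SDF grid at a world-space point.
    By convention, negative values lie inside the object, positive outside."""
    # Convert the world-space position into continuous grid coordinates.
    p = (np.asarray(point, dtype=float) - origin) / cell_size
    # Lower corner of the surrounding cell, clamped so the 2x2x2 block fits.
    i0 = np.clip(np.floor(p).astype(int), 0, np.array(grid.shape) - 2)
    t = p - i0  # fractional position inside the cell
    # Gather the 8 surrounding samples and interpolate axis by axis.
    c = grid[i0[0]:i0[0] + 2, i0[1]:i0[1] + 2, i0[2]:i0[2] + 2]
    cx = c[0] * (1 - t[0]) + c[1] * t[0]
    cy = cx[0] * (1 - t[1]) + cx[1] * t[1]
    return cy[0] * (1 - t[2]) + cy[1] * t[2]

# Toy snapshot: a 32^3 SDF of a sphere of radius 0.5 centered in the grid.
n = 32
cell = 1.0 / (n - 1)
origin = np.zeros(3)
idx = np.indices((n, n, n)).astype(float) * cell
grid = np.linalg.norm(idx - 0.5, axis=0) - 0.5

# During finger bending, a joint is queried against the snapshot and stops
# once its distance to the surface drops below a contact threshold.
d = sample_sdf(grid, origin, cell, [0.5, 0.5, 0.95])
contact = d < 0.01
```

Because the grid is captured once at grab time, each per-joint query during the bending animation is a constant-time interpolation, which is where the performance advantage over per-frame discrete collision detection comes from.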
Rights: © Václav Skala - UNION Agency
Appears in Collections:WSCG 2023: Full Papers Proceedings

Files in This Item:
File: F97-full.pdf (Full text, 10.85 MB, Adobe PDF)


