Zhuo Wang

HCI Researcher



GrabXR


My Role: Concept Design, Development 

Motivation:  

Current XR hand-grab interactions rely primarily on vision-based hand tracking, which makes hand pose estimates unreliable under occlusion and out-of-view conditions. This leads to failed grab attempts, unintended releases, and inconsistent interaction feedback. While researchers have proposed additional wearable devices for hand pose estimation, such solutions degrade the user experience and increase system cost.
Our project explores a novel hand-grab interaction system designed specifically for vision-limited environments. It predicts users' reach intentions and estimates hand pose from object information, arm pose, and other implicit modalities (such as eye gaze), rather than relying on complete hand image input.

 

More information coming soon!


