This typically involves a three-step loop: first, the system senses and tracks the current environment (including scenes and objects) and the user's multimodal inputs, converting them into digital information. Second, AI comprehends and reasons about this digital information. Third, the system adaptively determines when to intervene (timing), how to present information (multisensory output), what content to deliver, and where to place the assistance in the user's environment. This closed loop lets users receive on-demand, timely, and context-aware support throughout their interaction, improving their perceptual and cognitive capabilities.
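To make this loop concrete, below is a minimal Python sketch of the sense–reason–adapt cycle. All names here (Context, Assistance, sense, reason, adapt, assist_loop) are illustrative stand-ins of my own, not code from any specific system or project listed below.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Hypothetical digitized snapshot produced by the sensing step."""
    scene: dict       # tracked environment: scenes and objects
    user_input: dict  # multimodal user input, e.g. gaze, speech, gesture

@dataclass
class Assistance:
    """One assistance decision covering the four adaptive questions."""
    timing: str     # when to intervene ("now", "defer", "skip")
    modality: str   # how to present it (visual, audio, haptic)
    content: str    # what content to deliver
    placement: str  # where to anchor it in the user's environment

def sense(frame: dict) -> Context:
    """Step 1: track environment and user, converting both to digital info."""
    return Context(scene=frame["scene"], user_input=frame["input"])

def reason(ctx: Context) -> dict:
    """Step 2: comprehend and reason about the context (stand-in for an AI model)."""
    return {"task": ctx.user_input.get("intent"), "objects": ctx.scene}

def adapt(understanding: dict) -> Assistance:
    """Step 3: decide timing, modality, content, and spatial placement."""
    return Assistance(timing="now", modality="visual",
                      content=f"hint for {understanding['task']}",
                      placement="near target object")

def assist_loop(frames):
    """The closed loop: each sensed frame yields a context-aware decision."""
    for frame in frames:
        yield adapt(reason(sense(frame)))
```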
I am especially interested in XR as a mediating layer for Human-AI interaction: multimodal interaction enables precise intent expression and configurable AI boundaries (an agent's role, its degree of involvement, etc.), while XR's spatial nature allows AI to manifest in diverse forms (as an avatar or a tool) and makes its reasoning visible.
Michael Nebeling and Janet Johnson at the University of Michigan; Brennan Jones at XJTLU.
Projects
This project aims to explore the potential of using a VR data story to raise people’s situation awareness of health risks
This project aims to understand the promises and challenges of experiencing and curating exhibitions in VR
Interaction Techniques for Situated Visualization in Public Environments
This project explores socially acceptable spatial interaction design for situated visualization
Navigation Techniques in Multi-Scale Environments
This project focuses on the design and evaluation of a unified multi-scale navigation user interface to help users quickly understand spatial and hierarchical information in multi-scale virtual environments
A Streaming Gesture Recognition Framework
This project aims to develop a gesture recognition framework for resource-constrained scenarios
Design of Mixed Reality Systems to Enrich the Beverage Experience
This project focuses on an adaptive multisensory drinking system
Investigating Embodied Conversational Agents to Reduce LLM Hallucination in Virtual Reality Education
This project explores the design of hallucination-aware cues for embodied conversational agents
This project aims to provide a personalized, interactive learning experience and responsive teaching
This project explores how to use MR as a medium to regulate AI agents in collaborative work.
Publications
Zhuo Wang, ..., Wolfgang Stuerzlinger
Duo Streamers: A Streaming Gesture Recognition Framework
Boxuan Zhu, Sicheng Yang, Zhuo Wang, Hai-Ning Liang, Junxiao Shen
2025
Qian Zhu, Zhuo Wang, Wei Zeng, Wai Tong, Weiyue Lin, Xiaojuan Ma
CHI '24, Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, New York, NY, USA, 2024
Qian Zhu, Linping Yuan, Zian Xu, Leni Yang, Meng Xia, Zhuo Wang, Hai-Ning Liang, Xiaojuan Ma
International Journal of Human-Computer Studies, vol. 181, 2024, p. 103137
DreamVR: Curating an Interactive Exhibition in Social VR Through an Autobiographical Design Study
Jiaxun Cao, Qingyang He, Zhuo Wang, RAY LC, Xin Tong
CHI '23, Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, New York, NY, USA, 2023