Comparison of multimodal interactions in perspective-corrected multi-display environment

Ryo Fukazawa, Kazuki Takashima, Garth Shoemaker, Yoshifumi Kitamura, Yuichi Itoh, Fumio Kishino

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

7 Citations (Scopus)

Abstract

This paper compares multimodal interaction techniques in a perspective-corrected multi-display environment (MDE). The performance of multimodal interactions using gestures, eye gaze, and head direction is experimentally examined in an object manipulation task in MDEs and compared with a mouse-operated perspective cursor. Experimental results showed that gesture-based multimodal interactions provide performance equivalent in task completion time to mouse-based perspective cursors. A technique utilizing user head direction received positive comments from subjects even though it was not as fast. Based on the experimental results and observations, we discuss the potential of multimodal interaction techniques in MDEs.

Original language: English
Title of host publication: 3DUI 2010 - IEEE Symposium on 3D User Interfaces 2010, Proceedings
Pages: 103-110
Number of pages: 8
DOIs
Publication status: Published - 2010
Event: IEEE Symposium on 3D User Interfaces 2010, 3DUI 2010 - Waltham, MA, United States
Duration: 2010 Mar 20 - 2010 Mar 21

Publication series

Name: 3DUI 2010 - IEEE Symposium on 3D User Interfaces 2010, Proceedings

Conference

Conference: IEEE Symposium on 3D User Interfaces 2010, 3DUI 2010
Country/Territory: United States
City: Waltham, MA
Period: 10/3/20 - 10/3/21

Keywords

  • Gaze
  • Gestural interaction
  • H5.2 [Information interfaces and presentation]: User Interfaces - Graphical user interfaces
  • Multi-display environments
  • Perspective-aware interfaces
  • Pointing

