"Read That Article": Exploring Synergies between Gaze and Speech Interaction
Date
2015-10-28
Author
Vieira, Diogo
Freitas, Joao Dinis
Acartürk, Cengiz
Teixeira, Antonio
Sousa, Luis
Silva, Samuel
Candeias, Sara
Dias, Miguel Sales
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Abstract
Gaze information has the potential to benefit Human-Computer Interaction (HCI) tasks, particularly when combined with speech. Gaze can improve our understanding of the user's intention as a secondary input modality, or it can serve as the main input modality for users with some level of permanent or temporary impairment. In this paper, we describe a multimodal HCI system prototype that supports speech, gaze, and the combination of both. The system has been developed for Active Assisted Living scenarios.
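The combination of gaze and speech described above is a form of multimodal fusion: a deictic spoken command such as "read that article" is resolved against the UI element the user is currently fixating. The sketch below illustrates this idea under assumptions of our own; the class and function names (`GazeSample`, `fuse`) are hypothetical and do not come from the paper's implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GazeSample:
    """One gaze fixation: which UI element was looked at, and when."""
    target_id: str
    timestamp: float  # seconds

def fuse(speech_command: str, gaze: Optional[GazeSample]) -> Optional[str]:
    """Late-fusion sketch: resolve a deictic speech command against
    the most recent gaze fixation. Returns an action string, or None
    if the command cannot be resolved."""
    command = speech_command.strip().lower()
    # Deictic reference ("that") -> fill in the target from gaze.
    if "that" in command and gaze is not None:
        return f"read:{gaze.target_id}"
    # Fully specified spoken command -> gaze is not needed.
    if command.startswith("read "):
        return f"read:{command[len('read '):]}"
    return None
```

For example, `fuse("read that article", GazeSample("article_42", 3.1))` resolves to `read:article_42`, while `fuse("read headlines", None)` resolves from speech alone. A real system would also need temporal alignment between the speech and gaze streams, which this sketch omits.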
Subject Keywords
Multimodal, Gaze, Speech, Fusion, Social and professional topics, Computing profession, User characteristics, People with disabilities, Assistive technologies
URI
https://hdl.handle.net/11511/31172
DOI
https://doi.org/10.1145/2700648.2811369
Collections
Graduate School of Informatics, Conference / Seminar