"Read That Article": Exploring Synergies between Gaze and Speech Interaction
Date
2015-10-28
Author
Vieira, Diogo
Freitas, Joao Dinis
Acartürk, Cengiz
Teixeira, Antonio
Sousa, Luis
Silva, Samuel
Candeias, Sara
Dias, Miguel Sales
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Gaze information has the potential to benefit Human-Computer Interaction (HCI) tasks, particularly when combined with speech. Gaze can improve our understanding of the user intention, as a secondary input modality, or it can be used as the main input modality by users with some level of permanent or temporary impairments. In this paper we describe a multimodal HCI system prototype which supports speech, gaze and the combination of both. The system has been developed for Active Assisted Living scenarios.
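The prototype combines speech with gaze so that a deictic utterance such as "read that article" can be resolved against whatever the user is currently looking at. As a rough illustration only (not the authors' implementation), the sketch below assumes hypothetical GazeSample and SpeechCommand types and a simple time-window fusion rule for filling in the missing target of a spoken command.

# Minimal sketch of late fusion between a speech command and gaze fixations.
# All names and the time-window rule are illustrative assumptions, not the
# system described in the paper.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class GazeSample:
    timestamp: float            # seconds
    target_id: Optional[str]    # UI element under the gaze point, if any

@dataclass
class SpeechCommand:
    timestamp: float
    action: str                 # e.g. "read"
    target_id: Optional[str]    # explicit target, or None for "that"

def resolve_deictic_target(cmd: SpeechCommand,
                           gaze_history: List[GazeSample],
                           window: float = 1.0) -> Optional[str]:
    """If the utterance has no explicit target (e.g. 'read that article'),
    fall back to the element fixated closest in time to the command."""
    if cmd.target_id is not None:
        return cmd.target_id
    candidates = [g for g in gaze_history
                  if g.target_id and abs(g.timestamp - cmd.timestamp) <= window]
    if not candidates:
        return None
    return min(candidates, key=lambda g: abs(g.timestamp - cmd.timestamp)).target_id

# Example: the user says "read that article" while fixating article_42.
gaze = [GazeSample(9.8, "article_42"), GazeSample(10.1, "article_42")]
cmd = SpeechCommand(10.2, "read", None)
print(resolve_deictic_target(cmd, gaze))  # -> "article_42"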
Subject Keywords
Multimodal, Gaze, Speech, Fusion, Social and professional topics, Computing profession, User characteristics, People with disabilities, Assistive technologies
URI
https://hdl.handle.net/11511/31172
DOI
https://doi.org/10.1145/2700648.2811369
Collections
Graduate School of Informatics, Conference / Seminar
Suggestions
A Multi-perspective Analysis of Social Context and Personal Factors in Office Settings for the Design of an Effective Mobile Notification System
Çavdar, Şeyma; Taşkaya Temizel, Tuğba; Musolesi, Mirco; Tino, Peter (2020-03-01)
In this study, we investigate the effects of social context, personal and mobile phone usage on the inference of work engagement/challenge levels of knowledge workers and their responsiveness to well-being related notifications. Our results show that mobile application usage is associated with the responsiveness and work engagement/challenge levels of knowledge workers. We also developed multi-level (within- and between-subjects) models for the inference of attentional states and engagement/challenge levels w...
Trust attribution in collaborative robots: An experimental investigation of non-verbal cues in a virtual human-robot interaction setting
Özcan, Ahmet Meriç; Şahin, Erol; Acartürk, Cengiz; Department of Bioinformatics (2021-6)
This thesis reports the development of non-verbal HRI (Human-Robot Interaction) behaviors on a robotic manipulator, evaluating the role of trust in collaborative assembly tasks. Towards this end, we developed four non-verbal HRI behaviors, namely gazing, head nodding, tilting, and shaking, on a UR5 robotic manipulator. We used them under different degrees of user trust in the robot's actions. Specifically, we used a certain head-on-neck posture for the cobot using the last three links along with the gr...
Gestures production under instructional context The role of mode of instruction
Melda, Coşkun; Acartürk, Cengiz (Cognitive Science Society ; 2015-09-25)
We aim at examining how communication mode influences the production of gestures under specific contextual environments. Twenty-four participants were asked to present a topic of their choice under three instructional settings: a blackboard, paper-and-pencil, and a tablet. Participants’ gestures were investigated in three groups: deictic gestures that point to entities, representational gestures that present picturable aspects of semantic content, and beat gestures that are speech-related rhythmic hand move...
The IRIS Project A liaison between industry and academia towards natural multimodal communication
Freitas, Joao; Candeias, Sara; Dias, Miguel Sales; Lleida, Eduardo; Ortega, Alfonso; Teixeira, Antonio; Silva, Samuel; Acartürk, Cengiz; Orvalho, Veronica (2014-11-30)
This paper describes a project with the overall goal of providing a natural interaction communication platform accessible and adapted for all users, especially for people with speech impairments and the elderly, by sharing knowledge between Industry and Academia. The platform will adopt the principles of natural user interfaces such as speech, silent speech, gestures, pictograms, among others, and will provide a set of services that allow easy access to social networks, friends and remote family members, thus ...
Wireless speech recognition using fixed point mixed excitation linear prediction (MELP) vocoder
Acar, D; Karci, MH; Ilk, HG; Demirekler, Mübeccel (2002-07-19)
A bit stream based front-end for wireless speech recognition system that operates on fixed point mixed excitation linear prediction (MELP) vocoder is presented in this paper. Speaker dependent, isolated word recognition accuracies obtained from conventional and bit stream based front-end systems are obtained and their statistical significance is discussed. Feature parameters are extracted from original (wireline) and decoded speech (conventional) and from the quantized spectral information (bit stream) of t...
Citation Formats
IEEE
D. Vieira et al., “‘Read That Article’: Exploring Synergies between Gaze and Speech Interaction,” 2015, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/31172.