A gaze-centered multimodal approach to human-human social interaction
Date
2017-06-23
Authors
Acartürk, Cengiz
Kalkan, Sinan
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats
135 views, 0 downloads
This study investigates gaze aversion behavior in human-human dyads during the course of a conversation. Our goal is to identify the parametric infrastructure that will underlie the development of gaze behavior in Human-Robot Interaction. We employed a job interview setting in which pairs (an interviewer and an interviewee) conducted mock job interviews. Three pairs of native speakers took part in the experiment. Two eye-tracking glasses recorded the scene video, the audio, and the eye gaze positions of the participants. The analyses involved synchronization of the multimodal data: video recordings for face tracking, gaze data from the eye trackers, and audio for speech segmentation. We investigated the frequency, duration, timing, and spatial positions of gaze aversions relative to the interlocutor's face. The results revealed that the interviewees performed gaze aversions more frequently than the interviewers. Moreover, gaze aversions lasted longer when accompanied by speech. Finally, specific speech events, such as pauses and speech-end signals, had a significant impact on gaze aversion behavior.
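As a concrete illustration of the analysis described in the abstract, the sketch below shows one plausible way to classify synchronized gaze samples as on-face versus averted, given per-frame face bounding boxes from the scene video, and to summarize aversion frequency and duration. This is a minimal sketch under assumed data structures; GazeSample, the bounding-box format, and the helper functions are all hypothetical, not the authors' actual pipeline.

```python
# Minimal sketch (hypothetical, not the authors' code): classify synchronized
# gaze samples as on-face vs. averted and summarize gaze-aversion statistics.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class GazeSample:
    t: float   # timestamp in seconds, already synchronized across modalities
    x: float   # gaze x position in scene-video pixel coordinates
    y: float   # gaze y position in scene-video pixel coordinates

Box = Tuple[float, float, float, float]  # (left, top, right, bottom) face box

def on_face(s: GazeSample, box: Box) -> bool:
    """True if the gaze point falls inside the interlocutor's face box."""
    left, top, right, bottom = box
    return left <= s.x <= right and top <= s.y <= bottom

def aversion_episodes(samples: List[GazeSample],
                      boxes: List[Box]) -> List[Tuple[float, float]]:
    """Group consecutive off-face samples into (start, end) aversion episodes."""
    episodes: List[Tuple[float, float]] = []
    start: Optional[float] = None
    for s, box in zip(samples, boxes):  # one face box per synchronized sample
        if not on_face(s, box):
            if start is None:
                start = s.t             # aversion episode begins
        elif start is not None:
            episodes.append((start, s.t))  # gaze returned to the face
            start = None
    if start is not None:               # aversion still open at recording end
        episodes.append((start, samples[-1].t))
    return episodes

def summarize(episodes: List[Tuple[float, float]], total_s: float) -> dict:
    """Aversion frequency (episodes per minute) and mean duration (seconds)."""
    durations = [end - start for start, end in episodes]
    return {
        "frequency_per_min": 60.0 * len(episodes) / total_s,
        "mean_duration_s": sum(durations) / len(durations) if durations else 0.0,
    }
```

In a full pipeline of this kind, the face boxes would come from a face tracker run on the scene video and speech segments from the audio track, so each aversion episode could also be labeled as speech-accompanied or silent, and aligned with events such as pauses and speech ends.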
Subject Keywords
Gaze aversion, Mobile eye tracking, Job interview task
URI
https://hdl.handle.net/11511/30991
DOI
https://doi.org/10.1109/cybconf.2017.7985753
Conference Name
3rd IEEE International Conference on Cybernetics (CYBCONF)
Collections
Graduate School of Informatics, Conference / Seminar
Suggestions
A multi-group analysis of the effects of individual differences in mindfulness on nomophobia
Arpaci, Ibrahim; Baloğlu, Mustafa; Kesici, Şahin (2019-03-01)
This study aimed to investigate the impact of individual differences in mindfulness on nomophobia. We developed and validated two structural models to identify the relationship between mindfulness and nomophobia. The 'Nomophobia Questionnaire' and the 'Mindful Attention Awareness Scale' were used to obtain data from the subjects. One-way MANOVA results suggested a statistically significant difference in nomophobia based on higher versus lower mindfulness. Further, a multi-group analysis was conducted to tes...
A gaze-centered multimodal approach to face-to-face interaction
Arslan Aydın, Ülkü; Acartürk, Cengiz; Department of Cognitive Sciences (2020)
Face-to-face conversation implies that interaction should be characterized as an inherently multimodal phenomenon involving both verbal and nonverbal signals. Gaze is a nonverbal cue that plays a key role in achieving social goals during the course of conversation. The purpose of this study is twofold: (i) to examine gaze behavior (i.e., aversion and gaze on face) and relations between gaze and speech in face to face interaction, (ii) to construct computational models to predict gaze behavior using high-lev...
A design-based research on shared metacognition through the community of inquiry framework in online collaborative learning environments
Ataş, Amine Hatun; Yıldırım, Zahide; Department of Computer Education and Instructional Technology (2021-2-02)
This study advances the emerging research on shared metacognition through the lens of the Community of Inquiry Framework. It seeks components and utterances of the community of inquiry and shared metacognition in online collaborative learning environments to bring to the fore an instructional design model and instructional design principles. A three-cycle Design-Based Research method was followed in two cases of university students (associate degree and graduate degree) by triangulating quantitative and qua...
Tracing users' behaviors in a multimodal instructional material: An eye-tracking study
Yecan, Esra; Sumuer, Evren; Baran, Bahar; Çağıltay, Kürşat (2007-07-27)
This study aims to explore user behaviors in instructional environments combining multimodal presentation of information. Cognitive load theory and dual coding theory were taken as the theoretical perspectives for the analyses. For this purpose, user behaviors were analyzed by recording participants' eye movements while they were using an instructional material with synchronized video and PowerPoint slides. 15 participants' eye fixation counts and durations for specific parts of the material were collected....
A social psychological examination of actions concerning poverty
Fidan, Merve; Cingöz Ulu, Banu; Department of Psychology (2022-2-2)
In the field of intergroup prosocial behavior, which has recently begun to be included in the field of social psychology, studies examining actions concerning poverty have also recently begun. However, both in the case of Turkey and for the field as a whole, there is still much to be examined from the perspective of intergroup prosociality regarding poverty. Three studies were conducted to examine the role of ideological orientations, causal attributions of poverty and emotions on poverty-related actions. T...
Citation Formats
C. Acartürk and S. Kalkan, “A gaze-centered multimodal approach to human-human social interaction,” presented at the 3rd IEEE International Conference on Cybernetics (CYBCONF), Exeter, England, 2017, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/30991.