Speech Driven Gaze in a Face-to-Face Interaction
Date: 2021-03-01
Author: Arslan Aydin, Ulku; Kalkan, Sinan; Acartürk, Cengiz
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Gaze and language are major pillars in multimodal communication. Gaze is a non-verbal mechanism that conveys crucial social signals in face-to-face conversation. However, compared to language, gaze has been less studied as a communication modality. The purpose of the present study is twofold: (i) to investigate gaze direction (i.e., aversion and face gaze) and its relation to speech in a face-to-face interaction; and (ii) to propose a computational model for multimodal communication, which predicts gaze direction from high-level speech features. Twenty-eight pairs of participants took part in data collection. The experimental setting was a mock job interview, and the eye movements of both participants were recorded. The speech data were annotated according to the ISO 24617-2 Standard for Dialogue Act Annotation, as well as with manual tags based on previous social gaze studies. A comparative analysis was conducted using Convolutional Neural Network (CNN) models with specific architectures, namely VGGNet and ResNet. The results showed that the frequency and the duration of gaze differ significantly depending on the role of the participant. Moreover, the ResNet models achieved higher than 70% accuracy in predicting gaze direction.
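The abstract describes ResNet-style CNNs that map windows of speech features to a gaze-direction label (aversion vs. face gaze). The paper's actual architecture and features are not reproduced here; the following is a minimal NumPy sketch of the core idea only: a residual 1-D convolutional block over a window of speech-feature vectors, followed by a two-class softmax readout. All dimensions, weights, and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    # 'same'-padded 1-D correlation; x: (channels, time), w: (out_ch, in_ch, k)
    out_ch, in_ch, k = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))
    T = x.shape[1]
    y = np.zeros((out_ch, T))
    for o in range(out_ch):
        for t in range(T):
            y[o, t] = np.sum(w[o] * xp[:, t:t + k])
    return y

def residual_block(x, w1, w2):
    # ResNet-style block: two convolutions with a skip (identity) connection
    h = np.maximum(conv1d(x, w1), 0.0)            # ReLU
    return np.maximum(conv1d(h, w2) + x, 0.0)     # add input back, then ReLU

# Hypothetical dimensions: 8 speech-feature channels, a 20-frame window
C, T, K = 8, 20, 3
x = rng.standard_normal((C, T))                   # one window of speech features
w1 = 0.1 * rng.standard_normal((C, C, K))
w2 = 0.1 * rng.standard_normal((C, C, K))

h = residual_block(x, w1, w2)                     # (C, T), same shape as input
logits = h.mean(axis=1) @ rng.standard_normal((C, 2))  # 2 classes: aversion / face gaze
probs = np.exp(logits) / np.exp(logits).sum()     # softmax over the two labels
print(probs)
```

In the paper, such blocks are stacked into full VGGNet/ResNet architectures and trained on annotated dialogue data; the sketch above only illustrates why the skip connection leaves the block's output shape identical to its input, which is what allows deep stacking.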
URI: https://hdl.handle.net/11511/89847
Journal: FRONTIERS IN NEUROROBOTICS
DOI: https://doi.org/10.3389/fnbot.2021.598895
Collections: Department of Computer Engineering, Article
Suggestions
Signals of understanding in multilingual communication : a cross-linguistic functional-pragmatic analysis of interjections
Akkuş, Mehmet; Daloğlu, Ayşegül; Department of English Language Teaching (2013)
The main objective of this study is to investigate the contribution of interjections as indicators of understanding in Azerbaijani-Turkish Lingua Receptiva (LaRa) communication within the framework of Functional Pragmatics. The data utilized in this study were collected by video recording four Turkish and two Azerbaijani native-speaker university students, who were paired with each other and played the world-famous guessing game Taboo. The length of the data obtained from these recordings is circa two hours. The...
Linguistic Reflections on Psychotherapy: Change in Usage of The First Person Pronoun in Information Structure Positions
Demiray, Cigdem Kose; Gençöz, Tülin (2018-08-01)
The aim of the present study was to understand changes in the speech of clients with regard to certain linguistic features from the 5th to the 15th session of psychotherapy. First-person pronoun use in information structure positions was analyzed in the speech of clients. The participants of this study were 11 psychotherapists (clinical psychology master's and doctorate students) and 16 clients (applicants to the AYNA Psychotherapy Unit). In the present study, word count results of clients' speeches were analyzed by the ANOVA method. According to r...
Vocal synchrony as a coregulation indicator of attachment bonds
Harma, Mehmet; Sümer, Nebi; Hazan, Cynthia; Department of Psychology (2014)
This dissertation aims to explore the concept of coregulation in adulthood based on analyses of vocal cues in conversations. Moderators that potentially affect vocal coordination between romantic partners were also examined. Twenty-four heterosexual dating couples (Mage = 21.25; SD = 1.03) from Cornell University were recruited for Study 1. Participants communicated with their romantic and stranger partners in a balanced order. Their conversations were recorded and vocal features were extracted. Gra...
Emotion analysis of Turkish texts by using machine learning methods
Boynukalın, Zeynep; Karagöz, Pınar; Department of Computer Engineering (2012)
Automatically analysing the emotion in texts is of increasing interest in today's research fields. The aim is to develop a machine that can detect the type of a user's emotion from their text. Emotion classification of English texts has been studied by several researchers, and promising results have been achieved. In this thesis, an emotion classification study on Turkish texts is introduced. To the best of our knowledge, this is the first study on emotion analysis of Turkish texts. In English there exist some well-defined...
Perceived benefits of three-way observation on the focal areas of objectives of the activities, error-correction techniques and group-work in a study conducted in an upper-intermediate class at Bilkent University School of English Language
Uçan, Bengü Yurtseven; Daloğlu, Ayşegül; Department of English Language Teaching (2004)
This study aims to explore the perceived benefits of three-way observation on the focal areas of objectives of the activities, error-correction techniques, and group-work in an upper-intermediate class in Bilkent University School of English Language. The data was collected through five classroom observations, six post-observation reflection sheets, five focus-group interviews with the students, and five post-observation interviews with the observer. A total of 15 upper-intermediate level students, one teac...
Citation Formats
U. Arslan Aydin, S. Kalkan, and C. Acartürk, “Speech Driven Gaze in a Face-to-Face Interaction,” FRONTIERS IN NEUROROBOTICS, pp. 0–0, 2021, Accessed: 00, 2021. [Online]. Available: https://hdl.handle.net/11511/89847.