A Low cost learning based sign language recognition system
Date: 2018
Author: Akış, Abdullah Hakan
Abstract
Sign Language Recognition (SLR) is an active area of research due to its important role in Human-Computer Interaction (HCI). The aim of this work is to automatically recognize hand gestures consisting of movements of the hand, arm, and fingers. To achieve this, we studied two different approaches, namely feature-based recognition and Convolutional Neural Network (CNN) based recognition. The first approach is based on segmentation, feature extraction, and classification, whereas the second is based on segmentation and a CNN that learns the signs from the image itself. To measure the recognition rates of the systems, tests are conducted on the eNTERFACE dataset of 8 American Sign Language (ASL) signs. A detailed analysis is performed to evaluate each step of both approaches. Experimental results show that the feature-based SLR system and the CNN-based SLR system achieved recognition rates of 95.31% and 93.12%, respectively. They also show that the CNN-based SLR system achieved a recognition rate of 94.29% when data augmentation was used to enlarge the training dataset.
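For illustration only, the CNN-based pipeline described in the abstract could be sketched roughly as below. This is a minimal Keras sketch under assumed parameters (64x64 segmented hand-region images, an arbitrary small architecture, and simple shift/rotation augmentation); the thesis's actual network, framework, and hyperparameters are not specified on this page.

# Hypothetical sketch: CNN classifier for 8 ASL signs with data augmentation.
# Segmentation is assumed to have already produced fixed-size hand-region images.
import numpy as np
import tensorflow as tf

NUM_CLASSES = 8            # the eNTERFACE subset of 8 ASL signs
INPUT_SHAPE = (64, 64, 1)  # assumed size of segmented hand-region images

def build_cnn(input_shape=INPUT_SHAPE, num_classes=NUM_CLASSES):
    """Small CNN classifier; layer sizes are illustrative only."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

# Data augmentation (small shifts/rotations) to enlarge the training set,
# reflecting the abstract's report that augmentation improved CNN accuracy.
augmenter = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=10, width_shift_range=0.1, height_shift_range=0.1)

if __name__ == "__main__":
    # Placeholder arrays standing in for segmented sign images and labels.
    x_train = np.random.rand(32, *INPUT_SHAPE).astype("float32")
    y_train = tf.keras.utils.to_categorical(
        np.random.randint(NUM_CLASSES, size=32), num_classes=NUM_CLASSES)

    model = build_cnn()
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(augmenter.flow(x_train, y_train, batch_size=8), epochs=1, verbose=0)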
Subject Keywords
Gesture, Sign language, Human-robot interaction, Human-computer interaction, Neural networks (Computer science)
URI
http://etd.lib.metu.edu.tr/upload/12622877/index.pdf
https://hdl.handle.net/11511/27877
Collections
Graduate School of Natural and Applied Sciences, Thesis
Suggestions
A smart couch design for improving the quality of life of the patients with cognitive diseases
Ertan, Halil; Alemdar, Hande; Incel, Özlem Durmaz; Ersoy, Cem (2012-07-09)
In this paper, we focus on the human activity recognition module of a homecare system that consists of wireless sensors developed for remotely monitoring patients with cognitive disorders, such as Alzheimer. To this end, as an initial study, we designed a smart couch that is equipped with accelerometer, vibration and force resistive sensors to monitor how much time people spend while sitting, lying or napping on the couch and to recognize the drifts from their daily routines. In order to distinguish these a...
A Flexible and Scalable Audio Information Retrieval System for Mixed-Type Audio Signals
Dogan, Ebru; Sert, Mustafa; Yazıcı, Adnan (Wiley, 2011-10-01)
The content-based classification and retrieval of real-world audio clips is one of the challenging tasks in multimedia information retrieval. Although the problem has been well studied in the last two decades, most of the current retrieval systems cannot provide flexible querying of audio clips due to the mixed-type form (e.g., speech over music and speech over environmental sound) of audio information in real world. We present here a complete, scalable, and extensible content-based classification and retri...
A Method for isolated sign recognition with KINECT
Işıklıgil, Emre; Toroslu, İsmail Hakkı; Department of Computer Engineering (2014)
Although there are various studies on sign language recognition (SLR), most of them use accessories like coloured gloves and accelerometers for data acquisition or require complex environmental setup to operate. In my thesis, I will use only Microsoft Kinect sensor for acquiring data for SLR. Kinect lets us obtain 3D positions of the body joints in real time without the help of any other device. After an isolated sign is captured, paths of the discriminative body joints are extracted. Then, a vector consist...
Human motion analysis via axis based representations
Erdem, Sezen; Tarı, Zehra Sibel; Department of Computer Engineering (2007)
Visual analysis of human motion is one of the active research areas in computer vision. The trend shifts from computing motion fields to understanding actions. In this thesis, an action coding scheme based on trajectories of the features calculated with respect to a part based coordinate system is presented. The part based coordinate system is formed using an axis based representation. The features are extracted from images segmented in the form of silhouettes. We present some preliminary experiments that d...
A remote sensing computer-assisted learning tool developed using the unified modeling language
Friedrich, J; Karslıoğlu, Mahmut Onur (Elsevier BV, 2004-07-01)
The goal of this work has been to create an easy-to-use and simple-to-make learning tool for remote sensing at an introductory level. Many students struggle to comprehend what seems to be a very basic knowledge of digital images, image processing and image arithmetic, for example. Because professional programs are generally too complex and overwhelming for beginners and often not tailored to the specific needs of a course regarding functionality, a computer-assisted learning (CAL) program was developed base...
Citation (IEEE)
A. H. Akış, “A Low cost learning based sign language recognition system,” M.S. - Master of Science, Middle East Technical University, 2018.