Vision-based single-stroke character recognition for wearable computing
Date
2001-05-01
Authors
Ozer, OF; Ozun, O; Tuzel, CO; Atalay, Mehmet Volkan; Cetin, AE
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Particularly when compared to traditional tools such as a keyboard or mouse, wearable computing data entry tools offer increased mobility and flexibility. Such tools include touch screens, hand gesture and facial expression recognition, speech recognition, and key systems. We describe a new approach for recognizing characters drawn by hand gestures or by a pointer on a user's forearm captured by a digital camera. We draw each character as a single, isolated stroke using a Graffiti-like alphabet. Our algorithm enables effective and quick character recognition. The resulting character recognition system has potential for application in mobile communication and computing devices such as phones, laptop computers, handheld computers and personal data assistants.
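The abstract above does not spell out the recognition algorithm, but a common way to recognize single, isolated Graffiti-style strokes is to quantize the pen trajectory into 8-direction chain codes and match the resulting sequence against per-character templates. The sketch below is purely illustrative (it is not the authors' method); the template sequences and character set are hypothetical.

```python
# Illustrative sketch only, NOT the paper's algorithm: single-stroke
# recognition by 8-direction chain coding plus edit-distance matching.
import math

def chain_code(points, n_dirs=8):
    """Quantize consecutive point-to-point movements into direction codes,
    collapsing immediate repeats (0 = right, 2 = up, 4 = left, 6 = down,
    with the y axis pointing up)."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0)               # -pi .. pi
        code = round(angle / (2 * math.pi / n_dirs)) % n_dirs
        if not codes or codes[-1] != code:
            codes.append(code)
    return codes

# Hypothetical templates: direction sequences for a few strokes.
TEMPLATES = {
    "L": [6, 0],   # down, then right
    "V": [7, 1],   # down-right, then up-right
    "-": [0],      # a plain rightward stroke
}

def edit_distance(a, b):
    """Levenshtein distance between two code sequences."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

def recognize(points):
    """Return the template character whose chain code is closest."""
    codes = chain_code(points)
    return min(TEMPLATES, key=lambda ch: edit_distance(codes, TEMPLATES[ch]))

stroke = [(0, 2), (0, 1), (0, 0), (1, 0), (2, 0)]  # down, then right
print(recognize(stroke))  # "L"
```

Because each character is a single isolated stroke, segmentation is trivial and matching reduces to comparing one direction sequence per input, which is what makes this style of recognizer fast enough for wearable and handheld devices.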
Subject Keywords
Character recognition, Wearable computers, Handheld computers, Speech recognition, Keyboards, Mice, Face recognition, Digital cameras, Application software, Mobile communication
URI
https://hdl.handle.net/11511/48857
Journal
IEEE INTELLIGENT SYSTEMS & THEIR APPLICATIONS
DOI
https://doi.org/10.1109/5254.940024
Collections
Department of Computer Engineering, Article
Suggestions
Vision-based human-computer interaction using laser pointer
Erdem, İbrahim Aykut; Atalay, Mehmet Volkan; Department of Computer Engineering (2003)
With the availability of today's inexpensive powerful hardware, it has become possible to design real-time computer vision systems even on personal computers. Therefore, computer vision becomes a powerful tool for human-computer interaction (HCI). In this study, three different vision-based HCI systems are described. As in all vision-based HCI systems, the developed systems require a camera (a webcam) to monitor the actions of the users. For pointing tasks, a laser pointer is used as the pointing device. The firs...
3D hand tracking in video sequences
Tokatlı, Aykut; Halıcı, Uğur; Department of Electrical and Electronics Engineering (2005)
The use of hand gestures provides an attractive alternative to cumbersome interface devices such as keyboard, mouse, joystick, etc. Hand tracking has a great potential as a tool for better human-computer interaction by means of communication in a more natural and articulate way. This has motivated a very active research area concerned with computer vision-based analysis and interpretation of hand gestures and hand tracking. In this study, a real-time hand tracking system is developed. Mainly, it is image-ba...
Tamper-Resistant Autonomous Agents-Based Mobile-Cloud Computing
Angın, Pelin; Ranchal, Rohit (2016-01-01)
The rise of the mobile-cloud computing paradigm has enabled mobile devices with limited processing power and battery life to achieve complex tasks in real-time. While mobile-cloud computing is promising to overcome limitations of mobile devices for real-time computing needs, the reliance of existing models on strong assumptions such as the availability of a full clone of the application code and non-standard system environments in the cloud makes it harder to manage the performance of mobile-cloud computing...
Comparison of cognitive modeling and user performance analysis for touch screen mobile interface design
Ocak, Nihan; Çağıltay, Kürşat; Department of Information Systems (2014)
The main aim of this thesis is to analyze and comparatively evaluate the usability of touch screen mobile applications through cognitive modeling and end-user usability testing. The study investigates the accuracy of the estimated results cognitive model produces for touch screen mobile phone interfaces. CogTool application was used as the cognitive modeling method. Turkcell Cüzdan application, which is suitable for the implementation of both methods, was chosen as the mobile application. Based on the feedb...
Data-driven image captioning via salient region discovery
Kilickaya, Mert; Akkuş, Burak Kerim; Çakıcı, Ruket; Erdem, Aykut; Erdem, Erkut; İKİZLER CİNBİŞ, NAZLI (Institution of Engineering and Technology (IET), 2017-09-01)
In the past few years, automatically generating descriptions for images has attracted a lot of attention in computer vision and natural language processing research. Among the existing approaches, data-driven methods have been proven to be highly effective. These methods compare the given image against a large set of training images to determine a set of relevant images, then generate a description using the associated captions. In this study, the authors propose to integrate an object-based semantic image r...
Citation Formats
IEEE
O. Ozer, O. Ozun, C. Tuzel, M. V. Atalay, and A. Cetin, “Vision-based single-stroke character recognition for wearable computing,”
IEEE INTELLIGENT SYSTEMS & THEIR APPLICATIONS
, pp. 33–37, 2001, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/48857.