TWO HANDED SIGN LANGUAGE RECOGNITION SYSTEM USING IMAGE PROCESSING


H.F.S.M. Fonseka 1, J.T. Jonathan 2, P. Sabeshan 3 and M.B. Dissanayaka 4
Department of Electrical and Electronic Engineering, Faculty of Engineering, University of Peradeniya, Sri Lanka.
Email: 1 shalinimarian@gmail.com, 2 jeraldtownly@gmail.com, 3 psabesane09@gmail.com, 4 maheshid@ee.pdn.ac.lk

ABSTRACT

Speech-impaired people are silent sufferers in society: because their communication is not transparent to others, they must rely on sign languages that most people do not understand. If they were able to express themselves effectively, it would benefit the nation as well as the world. The goal of this research is to develop a tool that, with the help of modern technology, lets a speech-impaired person communicate with a person who does not know sign language. This paper presents a low-cost approach to a two-handed sign language recognition system. The research addresses three fundamental issues of still-gesture-based human-computer interaction. After applying pre-processing techniques such as cropping, rotation and colour filtering, mapping is performed using a correlation coefficient method and a nail detection method. The accuracy of the system was tested with a real-time simulation setup in MATLAB. The research outcome helps fill the gap created by the absence of a simple, efficient application that converts gestures made by the signer into a predefined word, phrase or command.

Key words: Correlation, Finger nail detection, Image processing, Two handed signs

1. INTRODUCTION

Speech- and hearing-impaired people cannot communicate with others as hearing people do, because of their difficulty in hearing and speaking. For communication they use signs, a visual form of communication consisting of an organized collection of gestures. Since most people are not familiar with these hand gestures, a gap exists between the two communicating parties.
In order to fill this gap, a translation platform is needed; the importance of such a platform is its ability to connect the two parties. Implementing it is challenging, however, because sign languages differ from country to country and from region to region. To date, most work on sign language recognition has employed expensive wired data gloves that the user must wear [9], and these systems have mostly concentrated on finger orientation. The literature also reports computer-vision-based sign recognition systems [4] for many sign languages around the world. Two-handed sign language recognition is an emerging research area, with systems reported for several languages including Bangla [5], American Sign Language [4], Korean Sign Language [7] and Japanese Sign Language [8]. The Bangla two-handed sign language recognition system [5] consists of two steps: i) refinement and ii) recognition. In the refinement step, a Red-Green-Blue (RGB) colour model is adopted to select a threshold value heuristically for detecting the candidate region, which is obtained by colour segmentation and filtering; a statistical template matching technique is then used for recognition. The American Sign Language recognition system [4] is implemented with Hidden Markov Models (HMMs); its authors demonstrated that HMMs improve the robustness of recognition even on a small scale and, together with breaking the signs down into phonemes, provide a powerful and potentially scalable framework for modelling hand gesture recognition. The Korean system [7] is a vision-based recognizer for the Korean manual alphabet (KMA), a subset of Korean Sign Language, which recognizes skin-coloured human hands by implementing a fuzzy min-max neural network algorithm.
As described above, although there are many approaches to sign language recognition, most of them use a coloured glove or a coloured band to identify the two hands separately. Very little research has been carried out without any of these wearable external

components such as coloured gloves and coloured bands. The research presented here was carried out without any such external material, using only the colour separation of the skin pixels in an image for recognition. Any user can therefore use the system without additional components or materials, which is the key advantage of the proposed system.

2. METHODOLOGY

2.1. Two Handed Gesture Mapping

The two-handed gesture matching system presented in this paper uses several steps, as shown in Figure 1: the input image is converted to black and white; the hand gesture is rotated, cropped and resized; the result is correlated with the database; the entries with the higher correlation values are passed to nail detection and matching; and the matched gesture is output. In this system, a green background is used when capturing the image, for simplicity of implementation.

Figure 1: High level architecture of still gesture mapping

The gray image produced by the colour filtering step is converted to a binary image by defining a threshold, determined through Monte Carlo simulations as 0.65 of the grayscale white level. The accuracy of the resulting binary image depends strongly on the lighting conditions under which the image is captured: if the lighting intensity is sufficient to capture the image with its natural colours, or close to them, the binary image can be considered noise free. Next, when the hands are not correctly oriented, the hand gesture in the binary image is rotated so that the non-dominant hand becomes vertical to the x-axis. This is done by drawing a line connecting the tip of the middle finger (the highest point) and the middle of the forearm, as shown in Figure 2(b), and rotating the image until that line is vertical; this step is needed because different orientations of the hand gesture affect the accuracy of the system. After rotation, the unnecessary segment of the arm is removed by cropping, and the image is stretched back to the default size (640x480 pixels) so that all images share the same resolution.
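The colour-difference binarization described above can be sketched in a few lines. The following is an illustrative sketch, not the authors' MATLAB code; the function name `binarize_hand` is hypothetical, and the 0.65 threshold follows the value reported in the paper. Each pixel is an (R, G, B) tuple in [0, 1], and a pixel is kept as white where the red-minus-green difference exceeds the threshold:

```python
def binarize_hand(rgb_rows, threshold=0.65):
    """Binarize a frame by the red-green channel difference.
    rgb_rows is a list of rows, each row a list of (r, g, b) tuples
    with values in [0, 1]. Pixels whose R - G difference exceeds the
    threshold (skin is red-dominant) become 1; everything else 0."""
    return [[1 if (r - g) > threshold else 0 for (r, g, b) in row]
            for row in rgb_rows]

# Tiny synthetic frame: one red-dominant "skin" pixel on a green background.
frame = [
    [(0.9, 0.1, 0.1), (0.0, 0.8, 0.0)],
    [(0.0, 0.8, 0.0), (0.0, 0.8, 0.0)],
]
mask = binarize_hand(frame)
print(mask)  # -> [[1, 0], [0, 0]]
```

Rotation, cropping and resizing would follow on this binary mask; they are omitted here to keep the sketch minimal.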
First, the RGB image captured from the web camera is separated into its three colour matrices: red (R), green (G) and blue (B). The G matrix is then subtracted from the R matrix, because red was experimentally found to be the most dominant colour of the skin [3]. The result is the gray image shown in Figure 2(a).

Figure 2: (a) Hand gesture (b) Drawing the line (c) Rotating the hand gesture (d) Cropping extra portions and resizing

After the input image has been converted to the predefined pixel size, it is ready for correlation analysis. The input image is correlated with each entry in the database and a correlation coefficient is obtained for every entry; the entries with the maximum correlation are then found. The correlation coefficient is calculated from eq. (1) [10], the standard 2-D correlation coefficient:

r = [ sum_m sum_n (A_mn - mean(A)) (B_mn - mean(B)) ] / sqrt( [ sum_m sum_n (A_mn - mean(A))^2 ] [ sum_m sum_n (B_mn - mean(B))^2 ] )    (1)

where A is the input image, B is a database image, and mean(A) and mean(B) are their mean intensities.
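The per-entry correlation step can be sketched as follows. This is an illustrative Python sketch (the paper's implementation is in MATLAB, whose corr2 function computes the same quantity); images are assumed to be already preprocessed to equal size:

```python
import math

def corr2(a, b):
    """2-D correlation coefficient between two equally sized grayscale
    images (lists of rows), as in eq. (1): centre both images on their
    means, then divide the sum of cross products by the geometric mean
    of the two centred sums of squares."""
    flat_a = [v for row in a for v in row]
    flat_b = [v for row in b for v in row]
    mean_a = sum(flat_a) / len(flat_a)
    mean_b = sum(flat_b) / len(flat_b)
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(flat_a, flat_b))
    den = math.sqrt(sum((x - mean_a) ** 2 for x in flat_a) *
                    sum((y - mean_b) ** 2 for y in flat_b))
    return num / den

img = [[1.0, 0.0], [0.0, 1.0]]
print(corr2(img, img))                       # identical images -> 1.0
print(corr2(img, [[0.0, 1.0], [1.0, 0.0]]))  # inverted image -> -1.0
```

In the full system this coefficient would be computed against every database entry, and the entries with the highest values kept as candidates.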

In this research, ten images are used as the database. Images with higher correlation values against the input can be assumed to be possible candidates for the correct gesture, so at most the five highest correlation coefficient values are selected; the corresponding images are checked further with the nail detection method, while images with lower correlation coefficients are discarded.

The nail detection method is applied to the five selected images. Analysis of the colour histograms of sample hand gestures showed that red and blue are more dominant than green in the nail portion of a hand gesture image. Using this finding, a threshold value obtained from the RGB histogram is set to extract the nail positions from the input. Because human skin tones spread over a wide pixel range, several colour filters in the spatial domain are defined when setting the threshold values, and tuning the filter parameters improves the accuracy of the system. An approximate area range is also defined for a nail portion in order to remove small regions caused by noise. The number of remaining small regions equals the number of nails, and the number of nails equals the number of extended fingers; in this way the number of fingers in the dominant hand is obtained, as shown in Figure 3. If the number of fingers in the input image equals the number of fingers in a database image, that image is the matched image.

Figure 3: Nail detection method

3. RESULTS AND DISCUSSIONS

Figure 4 shows the ten database images used to match two-handed gestures.

Figure 4: Database gestures

The hand gestures are made against a green background. The proposed still gesture mapping prototype was tested in real time with 14 random participants against this database of 10 signs; Figure 5 shows a real-time input alongside its matched gesture in the database.
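The nail-region counting described in the methodology amounts to connected-component labelling with an area filter. Below is a minimal pure-Python sketch of that idea (the paper's MATLAB implementation is not shown; the function name and the area limits here are illustrative, not the calibrated values):

```python
from collections import deque

def count_nail_regions(mask, min_area=2, max_area=50):
    """Count connected white regions in a binary nail mask whose area
    falls inside [min_area, max_area]. Each surviving region is taken
    as one finger nail, so the return value approximates the number of
    extended fingers; regions outside the range are treated as noise."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    nails = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Breadth-first flood fill over 4-connected neighbours.
                area, queue = 0, deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    area += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if min_area <= area <= max_area:  # drop noise specks
                    nails += 1
    return nails

# Two 2x2 nail blobs plus one isolated noise pixel (area 1, rejected).
mask = [
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 1, 1, 0],
]
print(count_nail_regions(mask))  # -> 2
```

The matching rule then reduces to comparing this count for the input image against the count stored for each candidate database image.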
The experimental setup uses a TOSHIBA web camera with a resolution of 1280 x 800 pixels.

Figure 5: Real and matched images in still gesture mapping

The success percentage of the method is calculated using eq. (2), the proportion of correctly matched attempts:

Success percentage = (number of correctly matched attempts / total number of attempts) x 100    (2)

The same two-handed gesture is obtained from different participants and applied to the system, and the success percentage for each gesture is calculated using eq. (2). According to the results presented in Figure 6, gesture id 9 has the lowest success rate. This is because gesture ids 8 and 9 are nearly identical from the front view, and the system detects the same number of nails for both, so gesture 9 receives a lower success rate than the others.

Figure 6: Success percentage for each image in the database

4. CONCLUSIONS

This paper presents a mechanism for recognizing two-handed sign languages using image processing techniques. The target of the project is to develop a prototype system, using the mechanism described above, that helps a person who does not know sign language communicate with deaf or speech-impaired people who use sign language. The system is implemented in MATLAB 2012, but any person can use it as a standalone application without opening the MATLAB command prompt, and it is capable of analyzing real-time gesture images as well as pre-saved gesture images.

5. REFERENCES

[1] R. C. Gonzalez, R. E. Woods, and S. L. Eddins, "Digital image representation," in Digital Image Processing Using MATLAB, 2nd ed., Gatesmark Publishing, 2009.
[2] J. Alon, V. Athitsos, Q. Yuan, and S. Sclaroff, "A unified framework for gesture recognition and spatiotemporal gesture segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2009.
[3] M. Dissanayake, H. Herath, W. Kumari, and W. Senevirathne, "Image processing based Sinhala sign language recognition system," IESL Transactions of Technical Papers, vol. 1, part B, IESL, 2013.
[4] C. Vogler and D. Metaxas, "A framework for recognizing the simultaneous aspects of American sign language," Computer Vision and Image Understanding, vol. 81, 2001.
[5] K. Deb, H. P. Mony, and S. Chowdhury, "Two-handed sign language recognition for Bangla character using normalized cross correlation," Global Journal of Computer Science and Technology, vol. 12, no. 3, 2012.
[6] S. Kaur and S. Malhotra, "Hand gesture recognition system: a review," International Journal of Advances in Computer Science and Communication Engineering (IJACSCE), vol. 2, no. 1, 2014.
[7] S. Kim, W. Jang, and Z. Bien, "A dynamic gesture recognition system for the Korean sign language (KSL)," vol. 26, pp. 354-359, 1996.
[8] H. Sagawa and M. Takeuchi, "A method for recognizing a sequence of sign language words represented in a Japanese sign language sentence," pp. 434-439, 2000.
[9] Sutarman, M. A. Majid, and J. M. Zain, "A review on the development of Indonesian sign language recognition system," Journal of Computer Science, Science Publications, pp. 1496-1505, 2013.
[10] "Functions, Region and Image Properties," Image Analysis, Image Processing Toolbox, MATLAB R2012b Documentation, MathWorks, Inc., 1994-2015.