
Available online at www.sciencedirect.com (ScienceDirect)
Procedia Computer Science 92 (2016) 455-460
2nd International Conference on Intelligent Computing, Communication & Convergence (ICCC-2016), Srikanta Patnaik, Editor in Chief. Conference organized by the Interscience Institute of Management and Technology, Bhubaneswar, Odisha, India.

COHST and Wavelet Features Based Static ASL Numbers Recognition

Asha Thalange*, Dr. S. K. Dixit
Assistant Professor, E&TC Dept., Walchand Institute of Technology, Solapur 413006, Maharashtra, India

Abstract: Bridging the communication gap between deaf and dumb people and the common man is a big challenge. A sign language recognition system could give the deaf and dumb an opportunity to communicate with non-signing people without the need for an interpreter. Research in sign language recognition has become significant because of the many difficulties involved in capturing signs; no single methodology or algorithm yet overcomes all of them and recognizes every sign with 100% accuracy. This paper proposes two new feature extraction techniques, Combined Orientation Histogram and Statistical (COHST) Features and Wavelet Features, for recognizing the static signs of the numbers 0 to 9 in American Sign Language (ASL). System performance is measured by extracting four different feature sets, Orientation Histogram, Statistical Measures, COHST Features and Wavelet Features, and training a neural network on each individually to recognize ASL numbers. COHST proves a stronger feature than Orientation Histogram or Statistical Features alone, giving a higher average recognition rate. Of all the systems designed for static ASL number recognition, the Wavelet-feature-based system performs best, with a maximum average recognition rate of 98.17%.

(c) 2016 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Peer-review under responsibility of the Organizing Committee of ICCC 2016.

Keywords: American Sign Language numbers; Gesture Recognition; Neural Network; Orientation Histogram; Statistical Features; Combined Orientation Histogram and Statistical (COHST) Features; Wavelet Features.

* Corresponding author. E-mail address: ashathalange@gmail.com
doi:10.1016/j.procs.2016.07.367

1. Introduction

Computers are used by many people, either at work or in their spare time. Special input and output devices have been designed over the years to ease communication between computers and humans; the two best known are the keyboard and mouse [1]. Every new device can be seen as an attempt to make the computer more intelligent and to enable humans to perform more complicated communication with it, made possible by the result-oriented efforts of computer professionals in creating successful human-computer interfaces (HCI). Gestures play an important role as input to HCI. Gestures are non-verbally exchanged information, and a person can perform a number of gestures at a time. Gesture recognition has significant application in sign language recognition. Sign language allows the deaf and dumb to communicate with each other and the world they live in. The literature reports sign language recognition using orientation histograms, artificial intelligence, Generic Fourier Descriptors, locally linear embedding, neural network shape fitting, object-based key frame selection, Haar wavelet representations, the Open-finger Distance Feature Measurement Technique, etc. Many researchers have tried to design an efficient and optimal sign language recognition system for various sign languages, detecting signs of alphabets, numbers, words and sentences, but no method yet recognizes all signs with 100% accuracy. This paper presents two feature extraction techniques implemented for American Sign Language (ASL) number recognition and compares them with respect to average recognition rate. Fig. 1 shows the different static ASL gestures for the numbers 0 to 9.

Fig. 1: American Sign Language numbers
2. Related Work

In sign language recognition, shape representation techniques are needed that describe the shape of the hand adequately and compute fast enough to benefit real-time recognition. These techniques should also provide translational, rotational and scaling invariance to make the system robust. Myron W. Krueger first proposed gesture recognition as a new mode of HCI in the mid-seventies [2]. Various techniques are currently used for hand gesture recognition. Initially, glove-based recognition was widely used: Zimmerman [3] developed the VPL data glove, which was connected to a computer to recognize signs, and glove-based recognition mostly dealt with measuring the bending of the fingers and the position and orientation of the hand in 3-D space. In vision-based gesture recognition, hand-shape segmentation in a dynamic environment is a major problem. It is simplified by using visual markings on the hands; colour markings on gloves captured by cameras have also been used for sign language recognition, and some researchers have implemented sign language and pointing gesture recognition based on different marking modes [4]. Arpita Ray Sarkar et al. [5] review the work carried out over the last twenty years, with a brief comparison analysing the difficulties encountered by these systems as well as their limitations. Rafiqul Zaman Khan [6] discusses the key issues and challenges of hand gesture recognition systems, and also reviews and compares various recent gesture recognition methods based on their reported results and databases, along with their advantages and disadvantages. A. Karami et al. [7] presented a system using the wavelet transform and neural networks (NN) for recognizing static gestures of alphabets in Persian sign language (PSL); it recognizes 32 selected PSL alphabets with an average recognition rate of 94.06%. Alaa Barkoky [8] proposed a thinning method to recognize images of numbers in PSL, using the real endpoints of the thinned image for recognition. The method proved good for real-time recognition and was independent of hand rotation and scaling; for 300 images it gave an average recognition rate of

96.6%. Qutaishat Munib et al. [9] performed ASL recognition using the Hough transform and neural networks, with a total of only 20 different alphabet and number signs. The performance of the system was measured by varying the threshold level for Canny edge detection and the number of samples used per sign; the average recognition rate obtained was 92.3% at a threshold value of 0.25. Kenny Teng et al. [10] proposed vision-based recognition for ASL numbers, presenting two methods for recognition and classification: boundary contour, and medial axis (skeleton and thinning). Another vision-based system recognizing both Indian Sign Language alphabets and numerals was proposed by Geetha M et al. [11], using B-Spline approximation and an SVM classifier, with 50 samples of each alphabet from A to Z and the numbers 0 to 5. Dipali Rojasara et al. [12] also proposed a system for real-time Indian Sign Language recognition using the wavelet transform and Principal Component Analysis, with Euclidean distance as the classification technique; 31 signs, the alphabets A to Z and the numbers one to five, were considered. S. N. Omkar et al. [13] proposed a thinning algorithm for sign language recognition that detects alphabets and numbers together; the input was a video data stream of a human hand gesturing all the signs of the ASL, using five different video sets, each containing all the ASL signs for the numbers (1 to 10) and alphabets (A to Z). Despite these efforts, no method yet recognizes all signs with 100% accuracy.

3. System Description

The block diagram of the implemented system is shown in Fig. 2.
Three different features, Orientation Histogram, Statistical Parameters and Wavelet features, are extracted for recognition. The combination of Orientation Histogram and Statistical Parameters, together known as COHST, is also used as a single feature. Thus, four different systems are built, using Orientation Histogram, Statistical Parameters, COHST Features and Wavelet features individually, for static ASL number (0 to 9) recognition.

Fig. 2: System Block Diagram

The system is divided into four main modules: image capture and pre-processing; segmentation and image cropping; feature extraction; and training and classification.

3.1. Image capture and pre-processing

The colour image of an ASL number gesture is captured by a 5-megapixel web camera, concentrating on the palm of the hand against a plain black background; the plain background simplifies the segmentation process. The colour image is converted to grayscale and then resized to 256 x 420 pixels. A single signer captured 30 images per sign (0 to 9) for training, along with a separate set of 376 images across all numbers for testing. The images are captured with varying scale, rotating the hand between +45 and -45 degrees in all directions, in order to make the system robust.
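A minimal sketch of this pre-processing step, assuming ITU-R BT.601 luminance weights and nearest-neighbour interpolation (the paper specifies neither), might look like:

```python
# Sketch of the capture/pre-processing step (Section 3.1): luminance
# conversion followed by a nearest-neighbour resize to 256 x 420 pixels.
# The conversion weights and interpolation method are assumptions.
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 uint8 image to float grayscale (BT.601 weights assumed)."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of a 2-D image to out_h x out_w."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in webcam frame
gray = resize_nearest(to_grayscale(frame), 256, 420)
print(gray.shape)  # (256, 420)
```

Any real interpolation scheme (bilinear, bicubic) would serve equally here; only the fixed 256 x 420 output size matters to the rest of the pipeline.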

3.2. Segmentation and image cropping

To make the image scale-invariant, the hand is segmented from the background of the pre-processed image and cropped to concentrate only on the palm. Thresholding is used for segmentation, converting the image to binary with a white hand on a black background. Proper segmentation of the hand from the background is very important: the subsequent feature extraction and recognition depend on it. Median filtering and morphological operations are performed on the segmented image to remove noise, and a limited extent of thinning is applied to the median-filtered binary image to obtain a clear separation among the edges and enhance the shape. Using the extreme points of the hand in all four directions, the binary image of the hand is cropped from the background. A new image is then obtained by filling the white pixels of the cropped image with the gray values from the original gray image. Both the cropped gray and cropped binary images are resized to 128 x 128 pixels to make the sign image scale-independent.

3.3. Feature extraction

Orientation Histogram and Statistical features are extracted from the cropped and resized gray image, and Wavelet features from the cropped and resized binary image. For the Orientation Histogram, gradients in the x direction are obtained by convolving the image with x = [0 -1 1], and in the y direction with y = [0 1 -1]T, yielding two gradient images dx and dy. The gradient direction is then obtained as tan-1(dy/dx). After converting the direction values from radians to degrees, the image is scanned to count the directions falling into the different histogram bins, and this count is used as the feature vector, of length 18. To extract the statistical measures, the grayscale 2-D image is converted to 1-D.
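The orientation-histogram feature described above can be sketched as follows, assuming 18 bins of 20 degrees over the full circle, with simple forward differences standing in for the convolution masks (border handling is an assumption):

```python
# Sketch of the orientation-histogram feature (Section 3.3): gradients
# equivalent to the masks [0 -1 1] and [0 1 -1]^T (borders zero-padded),
# direction from arctan(dy/dx), then counts in 18 bins of 20 degrees.
import numpy as np

def orientation_histogram(gray, bins=18):
    dx = np.zeros_like(gray, dtype=float)
    dy = np.zeros_like(gray, dtype=float)
    dx[:, :-1] = gray[:, 1:] - gray[:, :-1]   # horizontal gradient
    dy[:-1, :] = gray[1:, :] - gray[:-1, :]   # vertical gradient
    # Direction in degrees, mapped to [0, 360).
    theta = np.degrees(np.arctan2(dy, dx)) % 360.0
    hist, _ = np.histogram(theta, bins=bins, range=(0.0, 360.0))
    return hist  # feature vector of length 18

gray = np.random.rand(128, 128)
feat = orientation_histogram(gray)
print(len(feat))  # 18; the bin counts sum to the pixel count
```

Using arctan2 rather than a plain arctan(dy/dx) keeps the full 360-degree direction; with signless arctan the 18 bins would instead cover 180 degrees, and the paper does not state which variant it uses.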
Six parameters are calculated: mean, standard deviation, variance, coefficient of variation, range, and root mean square of successive differences. These 18 and 6 features are combined to form a COHST feature of length 24. The cropped and resized binary image is the input for 2-D DWT feature extraction using the Haar wavelet. The 1-D DWT is first applied along the rows, and then along the columns of the first-stage result, generating four sub-band regions in the transformed space: LL, LH, HL and HH. The LL band is a down-sampled (by a factor of two) version of the original image; the LH band contains horizontal features, the HL band contains vertical features, and the HH band tends to isolate localized high-frequency point features in the image. This constitutes one level of decomposition. The same procedure is then applied to the LL band, also known as the approximation coefficients matrix, to obtain the next levels of decomposition. In this work, 4 levels of decomposition are performed, and the approximation coefficients matrix of the 4th level is used as the feature vector for training. This 4th-level LL image is of size 8 x 8, i.e. 64 pixels; the 2-D data is converted into a 1-D vector of 64 elements, which acts as the Wavelet feature vector.

3.4. Training and classification

A multi-layered feed-forward back-propagation neural network is used as the classification engine. Four different networks are built, recognizing numbers from Orientation Histogram, Statistical Parameters, COHST Features and Wavelet features separately. The features extracted from the training database images are used to train these networks; once trained, the features extracted from the test images are applied to them for recognition of ASL numbers.
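The six statistical measures and the COHST concatenation might be sketched as below; the exact definitions of the coefficient of variation and of the RMS of successive differences are assumptions, since the paper only names the parameters:

```python
# Sketch of the six statistical measures and the COHST concatenation
# (Section 3.3): the 2-D gray image is flattened to 1-D, six scalars are
# computed, and they are appended to the 18-bin orientation histogram.
import numpy as np

def statistical_features(gray):
    x = gray.astype(float).ravel()        # 2-D image flattened to 1-D
    diffs = np.diff(x)                    # successive differences
    return np.array([
        x.mean(),                         # mean
        x.std(),                          # standard deviation
        x.var(),                          # variance
        x.std() / x.mean(),               # coefficient of variation (assumed std/mean)
        x.max() - x.min(),                # range
        np.sqrt(np.mean(diffs ** 2)),     # RMS of successive differences (assumed)
    ])

def cohst_feature(orient_hist_18, gray):
    # 18 orientation bins + 6 statistical measures -> 24-element COHST vector.
    return np.concatenate([orient_hist_18, statistical_features(gray)])

gray = np.random.rand(128, 128) + 1.0     # offset keeps the mean away from zero
feat = cohst_feature(np.zeros(18), gray)
print(feat.shape)  # (24,)
```

In practice the two halves of the vector live on very different scales (pixel counts vs. intensity statistics), so some normalisation before the neural network would be a reasonable, if unstated, step.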
Training and classification are performed while varying the number of neurons in the hidden layer to obtain the maximum average recognition rate.

4. Experimental Results and Discussion

Figs. 3(a)-3(d) show the resized grayscale image, segmented image, median-filtered image and thinned image, respectively, for ASL number 3. Figs. 4(a)-4(d) show the segmented thinned image, cropped image, resized cropped binary image and resized cropped grayscale image, respectively, for ASL number 2. Figs. 5(a)-5(c) show the cropped and resized grayscale image and its respective gradient images dx and dy in the x and y directions, for ASL number 4. Fig. 6(a) shows the cropped binary image of size 128 x 128 pixels whose

Wavelet features are extracted. Figs. 6(b)-6(d) show the LH, HL and HH images of the 1st-level decomposition, and Figs. 6(e)-6(h) show the approximation coefficients matrix image LL at the 1st, 2nd, 3rd and 4th levels of decomposition, respectively, for ASL number 8.

Fig. 3: Segmented, median-filtered and thinned images from the grayscale image of ASL number 3
Fig. 4: Binary and grayscale cropped and resized images of ASL number 2
Fig. 5: Cropped gray image and gradient images dx and dy of ASL number 4
Fig. 6: Different processing stages of the sign of ASL number 8

Table 1 summarizes the recognition of all ASL number signs using the four features. The results show that COHST is a stronger feature than Orientation Histogram or Statistical Features alone, giving a higher average recognition rate. Of all the systems designed for static ASL number recognition, the Wavelet-feature-based system gives the best performance, with the maximum average recognition rate.
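The four-level Haar decomposition behind this best-performing feature can be sketched in plain NumPy; the 1/2 normalisation per level is one common Haar convention, and the paper does not state which it uses:

```python
# Sketch of the wavelet feature (Section 3.3): repeated 2-D Haar steps.
# One level halves each dimension; after four levels a 128 x 128 binary
# image leaves an 8 x 8 approximation (LL) band, flattened to 64 elements.
import numpy as np

def haar_level(x):
    """One 2-D Haar step: return the LL, LH, HL, HH sub-bands."""
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0      # approximation (down-sampled image)
    lh = (a + b - c - d) / 2.0      # horizontal detail
    hl = (a - b + c - d) / 2.0      # vertical detail
    hh = (a - b - c + d) / 2.0      # diagonal detail
    return ll, lh, hl, hh

def wavelet_feature(binary_img, levels=4):
    ll = binary_img.astype(float)
    for _ in range(levels):
        ll, _, _, _ = haar_level(ll)  # keep only the LL band at each level
    return ll.ravel()                 # 8 x 8 -> 64 elements for a 128 x 128 input

img = (np.random.rand(128, 128) > 0.5).astype(float)
feat = wavelet_feature(img)
print(feat.size)  # 64
```

A wavelet library (e.g. PyWavelets' `wavedec2`) would give the same sub-band structure with properly handled boundaries; the hand-rolled version above only illustrates the sub-sampling that produces the 64-element vector.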

Table 1. Classification result of test images for ASL numbers 0 to 9 using different features (% recognition)

Feature                  |   0   |   1   |   2   |   3   |   4   |   5   |   6   |   7   |   8   |   9   | Average
Images used for testing  |  38   |  33   |  36   |  33   |  44   |  42   |  42   |  39   |  35   |  34   |
1) Statistical Measures  | 97.37 | 87.88 | 91.67 | 78.79 | 54.54 | 97.62 | 59.52 | 51.28 | 40.00 | 88.23 | 74.69%
2) Orientation Histogram | 100   | 96.97 | 97.22 | 100   | 81.82 | 95.24 | 64.28 | 53.85 | 45.71 | 94.12 | 82.92%
3) COHST                 | 100   | 87.88 | 100   | 100   | 86.36 | 100   | 66.67 | 87.18 | 57.14 | 94.18 | 87.94%
4) Wavelet Features      | 100   | 100   | 97.22 | 96.97 | 95.45 | 100   | 100   | 94.87 | 97.14 | 100   | 98.17%

5. Conclusions and Future Scope

Compared to glove-based systems and other, more complex feature extraction techniques, the Wavelet-feature-based system gives the best performance, with the maximum average recognition rate, while remaining a very simple and fast feature extraction method. Multiple features combined together also form a strong feature vector for classification. In future, other feature combinations can be tried, along with other classification engines, to improve the recognition rate. The same methods can also be applied to other static and dynamic signs of ASL as well as other sign languages, and developed into a mobile application.

Acknowledgement

This work is funded by the Government of India through AICTE's grant under the Career Award for Young Teachers (CAYT). We thank the Government of India for all the aid provided.

References

1. Henrik Birk and Thomas Baltzer Moeslund, Recognizing Gestures From the Hand Alphabet Using Principal Component Analysis, Master's Thesis, Laboratory of Image Analysis, Aalborg University, Denmark, 1996.
2. Myron W. Krueger, Artificial Reality II, Addison-Wesley, Reading, 1991.
3. Thomas G. Zimmerman and Jaron Lanier, A Hand Gesture Interface Device, ACM SIGCHI/GI, pages 189-192, 1987.
4. James Davis and Mubarak Shah, Recognizing Hand Gestures, ECCV, pages 331-340, Stockholm, Sweden, May 1994.
5. Arpita Ray Sarkar, G.
Sanyal, S. Majumder, Hand Gesture Recognition Systems: A Survey, International Journal of Computer Applications (0975-8887), Volume 71, No. 15, May 2013.
6. Rafiqul Zaman Khan and Noor Adnan Ibraheem, Hand Gesture Recognition: A Literature Review, International Journal of Artificial Intelligence & Applications (IJAIA), Vol. 3, No. 4, July 2012.
7. Ali Karami, Bahman Zanj, Azadeh Kiani Sarkaleh, Persian Sign Language (PSL) Recognition Using Wavelet Transform and Neural Networks, Expert Systems with Applications 38 (2011) 2661-2667, Elsevier Ltd.
8. Alaa Barkoky, Nasrollah M. Charkari, Static Hand Gesture Recognition of Persian Sign Numbers Using Thinning Method, 978-1-61284-774-0/11, 2011 IEEE.
9. Qutaishat Munib, Moussa Habeeb, Bayan Takruri, Hiba Abed Al-Malik, American Sign Language (ASL) Recognition Based on Hough Transform and Neural Networks, Expert Systems with Applications (2007) 24-37, ScienceDirect.
10. Kenny Teng, Jeremy Ng, Shirlene Lim, Computer Vision Based Sign Language Recognition for Numbers.
11. Geetha M, Manjusha U C, A Vision Based Recognition of Indian Sign Language Alphabets and Numerals Using B-Spline Approximation, International Journal on Computer Science and Engineering (IJCSE), ISSN: 0975-3397, Vol. 4, No. 03, March 2012.
12. Dipali Rojasara, Nehal Chitaliya, Real Time Visual Recognition of Indian Sign Language Using Wavelet Transform and Principal Component Analysis, International Journal of Soft Computing and Engineering (IJSCE), ISSN: 2231-2307, Volume 4, Issue 3, July 2014.
13. S. N. Omkar and M. Monisha, Sign Language Recognition Using Thinning Algorithm, ICTACT Journal on Image and Video Processing, August 2011, Volume 02, Issue 01, ISSN: 0976-9102.