Sign Language Number Recognition
Iwan Njoto Sandjaja
Informatics Engineering Department, Petra Christian University, Surabaya, Indonesia

Nelson Marcos, PhD
Software Technology Department, De La Salle University, Manila, Philippines

Abstract: A sign language number recognition system lays the foundation for handshape recognition, which addresses real and current problems in signing in the deaf community and leads to practical applications. The input to the sign language number recognition system is 5,000 Filipino Sign Language number video files with a frame size of 640 x 480 pixels at 15 frames/second. The color-coded gloves use fewer colors than the color-coded gloves in existing research. The system extracts important features from the video using a multi-color tracking algorithm that is faster than existing color tracking algorithms because it avoids recursion. The system then learns and recognizes Filipino Sign Language numbers in training and testing phases using a Hidden Markov Model (HMM). Feature extraction could track 92.03% of all objects, and the recognizer could recognize Filipino Sign Language numbers with 85.52% average accuracy.

Keywords: computer vision, human-computer interaction, sign language recognition, hidden Markov model, hand tracking, multi-color tracking

I. INTRODUCTION

Sign language is local, in contrast with the general opinion that assumes it is universal. Different countries, and at times even regions within a country, have their own sign languages. In the Philippines, for example, there are 13 variations of Filipino Sign Language based on regions [1]. Sign language is a natural language for the deaf.
It is a visual language conveyed primarily via hand and arm movements (called manual articulators, consisting of the dominant and non-dominant hand), accompanied by other parts of the body such as facial expression, eye, eyebrow, cheek, and tongue movement, and lip motion (called non-manual signals) [2]. Most hearing people do not understand any sign language and know very little about deafness in general. Although many deaf people lead successful and productive lives, this communication barrier can have problematic effects on many aspects of their lives.

There are three main categories in sign language recognition: handshape classification, isolated sign language recognition, and continuous sign classification. Handshape classification, or finger-spelling recognition, is one of the main topics in sign language recognition, since handshape can express not only some concepts but also special transition states in temporal sign language. For a period, finger-spelling was the sign language: sign language was just a string of signs. Isolated words are widely considered the basic unit in sign language, and many researchers [3] [4] focus on isolated sign language recognition. Some researchers [5] [6] also pay attention to continuous sign language recognition. Much of the work on continuous sign language recognition applies HMMs, which offer the advantage of segmenting a data stream into its continuous signs implicitly and thus bypass the hard problem of segmentation entirely.

System architectures for sign language recognition can be categorized into two main classes based on their input. The first is dataglove-based, whose input comes from gloves with sensors; its weakness is restricted movement, and its advantage is higher accuracy. The second is vision-based, whose input comes from a camera (a stereo camera or a web/USB camera).
The weaknesses of this approach are lower accuracy and higher computing cost; the advantages are that it is cheaper and less constraining than datagloves. To make hand tracking easier, color-coded gloves are usually used. A combination of both architectures, called a hybrid architecture, is also possible.

In the vision-based approach, the architecture of the system is usually divided into two main parts. The first part is feature extraction, which should extract important features from the video using computer vision or image processing methods such as background subtraction, pupil detection, hand tracking, and hierarchical feature characterization (shape, orientation, and location). The second part is the recognizer. From the features already extracted and characterized, the recognizer should be able to learn the pattern from training data and recognize testing data correctly. The recognizer employs machine learning algorithms; Artificial Neural Networks (ANN) and Hidden Markov Models (HMM) are the most common. In the vision-based approach, the camera setup could use a single camera, two or more cameras, or a special 3D camera. A stereo camera uses a two-camera configuration that imitates how human eyes work. The most
recent approach is the virtual stereo camera, which uses only one camera and generates the second camera virtually.

The main problem in sign language recognition is that it involves multiple channels and simultaneous events, which create a combinatorial explosion and a huge search space. This research also addresses problems in computer vision, machine learning, machine translation, and linguistics. The research could be seen as a computer-based lexicon/dictionary from sign language phrases, specifically numbers, into words/texts.

II. RELATED LITERATURE

A vision-based medium-vocabulary Chinese Sign Language (CSL) recognition system [3] was developed using tied-mixture density hidden Markov models (TMDHMM). Their experiment is based on a single frontal view; only a USB color camera, placed in front of the signer, is employed to collect the CSL video data, with an image size of 320 x 240 pixels. The recognition vocabulary contains 439 CSL signs, including 223 two-handed and 216 one-handed signs. Their experimental results show that the proposed methods achieve an average recognition accuracy of 92.5% on the 439 signs: 93.3% on two-handed signs and 91.7% on one-handed signs, respectively.

In more recent research [4], a novel viewpoint-invariant method for sign language recognition was proposed, converting the recognition task into a verification task. The method verifies uniqueness for a virtual stereo vision system formed by the observation and the template. The recognition vocabulary contains 100 CSL signs, and the image resolution is 320 x 240 pixels. The proposed method achieves an accuracy of 92% at rank 2.

In recent work [7], the design and implementation of a hand mimicking system is discussed. The system captures hand movement, analyzes it using MATLAB, and produces a 3D graphical hand model that imitates the user's hand movements using OpenGL.
The system captures hand movement using two cameras and approximates the user's 3D hand pose using stereovision techniques. Based on testing, the average difference between ideal and actual hand part orientation is about 20 degrees; 72.38% of the measured hand angular orientations deviate by less than 20 degrees, while 86.35% of the test cases have angular orientation errors of less than 45 degrees.

Two real-time hidden Markov model-based systems for recognizing sentence-level continuous American Sign Language (ASL), using a single camera to track the user's unadorned hands, were presented in [5]. The systems recognize sentences of the form personal pronoun, verb, noun, adjective, (the same) personal pronoun. Six personal pronouns, nine verbs, twenty nouns, and five adjectives make up a total lexicon of forty words. The first system observes the user from a desk-mounted camera and achieves 91.9% word accuracy; the second mounts the camera in a cap worn by the user and achieves 96.8% accuracy (97% with unrestricted grammar).

A portable letter sign language translator was developed using a specialized glove with flex/bend sensors [8]. Their system translates hand-spelled words to letters through a Personal Digital Assistant (PDA), using a Neuro-Fuzzy Classifier (NEFCLASS) for the letter translation algorithm. The system cannot recognize the letters M and N, but it recognizes the other letters with a minimum accuracy of 65% (letter Z), a maximum accuracy of 100%, and an average accuracy of 90.2%.

A framework for recognizing American Sign Language (ASL) was developed using hidden Markov models [6]. The data set consists of 499 sentences, between 2 and 7 signs long, totaling 1,604 signs from a 22-sign vocabulary. These data were collected with an Ascension Technologies MotionStar system at 60 frames per second.
In addition, they collected data from the right hand with a Virtual Technologies CyberGlove, which records wrist yaw, pitch, and the joint and abduction angles of the fingers, also at 60 frames per second. The results show clearly that the quadrilateral-based description of the handshape (95.21%) is far more robust than the raw joint angles (83.15%). The best result, 88.89% sentence accuracy and 96.15% word accuracy, was achieved using a PaHMM monitoring the right-hand movement channel and right-hand handshape.

CopyCat, an educational computer game that uses computer gesture recognition technology to develop American Sign Language (ASL) skills in children ages 6-11, is presented in [9]. Data from the children's signing was recorded using an IEEE 1394 video camera and wireless accelerometers mounted in colored gloves. The dataset consists of 541 phrase samples and 1,959 individual sign samples of five children signing game phrases from a 22-word vocabulary. The vocabulary is limited to a subset of ASL that includes single- and double-handed signs but excludes more complex linguistic constructions such as classifier manipulation, facial gestures, and level emphasis. Each phrase describes an encounter for the game character, Iris the cat: the students can warn of a predator's presence, such as "go chase snake", or identify the location of a hidden kitten, such as "white kitten behind wagon". Brashear et al. achieve an average word accuracy of 93.39% for the user-dependent models. The user-independent models were generated by training on a dataset of four children and testing on the remaining child's dataset; for these, Brashear et al. achieve an average word accuracy of 86.28%. When samples were chosen across all samples and users (training and testing on data from all students), they achieve on average 92.96% word-level accuracy with a standard deviation of 1.62%.
A vision-based interface for controlling a computer mouse via 2D and 3D hand gestures was presented in [10] [11]. The proposed algorithm addresses three different subproblems: (a) hand hypothesis generation (i.e., a hand
appears in the field of view for the first time), (b) hand hypothesis tracking in the presence of multiple, potentially occluding objects (i.e., previously detected hands move arbitrarily in the field of view), and (c) hand hypothesis removal (i.e., a tracked hand disappears from the field of view). Their algorithm also involves simple prediction, using a linear rule to predict the location of hand hypotheses at time t based on their locations at times t-2 and t-1. Having defined the contour of a hand, finger detection is performed by evaluating a curvature measure on contour points at several scales. As confirmed by several experiments, the proposed interface achieves accurate mouse positioning, smooth cursor movement, and reliable recognition of gestures activating button events. Owing to these properties, their interface can be used as a virtual mouse for controlling any Windows application.

III. ARCHITECTURAL DESIGN

The general architectural design for sign language number recognition is shown in Fig. 1. The input of the sign language number recognition system is Filipino Sign Language number video. In general, there are two main modules in the sign language recognition architecture: the feature extraction module and the recognizer module. The feature extraction module extracts important features from the video per frame; the recognizer module learns and recognizes the video from its features.

The feature extraction module consists of face detection, hand tracking, and feature characterization. The face detection module detects the face area. The hand tracking module tracks the movements of the dominant and non-dominant hands. Feature characterization takes important features such as the position of the face as a reference, the position of the dominant hand and its fingers, the area of the dominant hand and each finger, the orientation of the dominant hand and each finger, and the position, area, and orientation of the non-dominant hand.
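The characterization step, which reduces each tracked object to an ellipse center and expresses finger positions relative to a reference point, can be sketched in a few lines of Python. This is an illustrative sketch with made-up coordinates, not the authors' implementation:

```python
# Illustrative sketch (not the authors' code) of reference-relative feature
# characterization: the thumb's ellipse center is kept absolute, and every
# other finger is expressed as an offset from the thumb.

def characterize(thumb, index, middle, ring, little):
    """Build one frame's feature vector from five (x, y) ellipse centers."""
    tx, ty = thumb
    features = [tx, ty]                      # absolute thumb position
    for fx, fy in (index, middle, ring, little):
        features.extend([fx - tx, fy - ty])  # finger offset from the thumb
    return features                          # 10 features per frame

# Made-up example coordinates (hypothetical, for illustration only):
vec = characterize((100, 200), (110, 180), (120, 170), (130, 175), (140, 190))
```

Expressing fingers relative to a reference makes the features invariant to where the hand sits in the frame, which is why a reference point matters here.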
In this research, the feature characterization used as the feature vector is the position of the dominant hand's thumb in x and y coordinates, together with the x and y coordinates of the other fingers relative to the thumb position. The output of feature characterization becomes the input of the second block, the recognizer.

The recognizer employs a Hidden Markov Model and consists of two main parts: a training module and a testing module. In the training module, the recognizer learns the pattern of sign language numbers using annotated input from the feature extraction module. In the testing (verification) module, the recognizer receives input that was never seen during training, yet is annotated for verification purposes.

Figure 1. System architecture

A. Feature Extraction

The feature extraction module uses the OpenCV library [12]. The detailed flowchart of feature extraction is shown in Fig. 2. For each frame in the video, the feature extraction module begins by calling a smoothing procedure to eliminate camera noise. The frame size is 640 x 480 pixels in the BGR (Blue, Green, Red) color space. After smoothing the frame, the module converts the frame's color space from BGR to HSV (Hue, Saturation, Value).

Figure 2. Feature extraction flowchart
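The BGR-to-HSV conversion performed in this step can be illustrated with a small pure-Python function following OpenCV's 8-bit conventions (hue is halved to fit in [0, 180), saturation and value stay in [0, 255]). This is a sketch for illustration only; the actual system calls the OpenCV library:

```python
# Sketch of the per-pixel BGR-to-HSV conversion, using OpenCV's 8-bit
# conventions. Illustrative only: the real system uses the OpenCV library.

def bgr_to_hsv(b, g, r):
    mx, mn = max(b, g, r), min(b, g, r)
    v = mx                                             # value = brightest channel
    s = 0 if mx == 0 else round(255 * (mx - mn) / mx)  # saturation
    if mx == mn:            # gray pixel: hue is undefined, use 0
        h_deg = 0.0
    elif mx == r:
        h_deg = (60.0 * (g - b) / (mx - mn)) % 360
    elif mx == g:
        h_deg = 60.0 * (b - r) / (mx - mn) + 120
    else:                   # mx == b
        h_deg = 60.0 * (r - g) / (mx - mn) + 240
    return int(round(h_deg / 2)) % 180, s, v

# Pure red in BGR is (0, 0, 255): hue 0. Pure green: hue 60. Pure blue: hue 120.
```

Working in HSV lets the later filtering stages separate chromaticity (hue) from lighting (value), which is what makes color-coded glove tracking robust to brightness changes.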
The saturation and value filtering module extracts the hue channel based on specific saturation and value parameters, giving two outputs: a skin frame and a color frame. The skin frame is processed by the skin tracking module, which is basically the color tracking module with different size filtering, because the face and non-dominant hand have a larger area than each finger of the dominant hand. The skin tracking procedure produces a face ellipse. The skin tracking module also draws a black-filled contour of the face on the color frame to remove the lips from it.

The color frame is processed by the color tracking module, as shown in Fig. 2, with a different hue-range parameter for each finger. The hue parameters are found by searching for the maximum and minimum hue value of each finger, and the ranges do not overlap one another. The color tracking module yields an ellipse area for each finger as its result.

The Merge algorithm simply executes the cvFindContours procedure again, with the connected contours from the color tracking procedure as input. Each contour is fitted by an ellipse. If there is more than one ellipse, the Merge algorithm returns the first detected ellipse; by always returning the first ellipse, it avoids a long or endless recursive process. The next module, Draw and Print, draws each ellipse and prints its parameters in XML format. Fig. 3 shows a sample frame captured with the resulting ellipses.

Figure 3. Sample image with resulting ellipses

There are six color trackers: one for skin color (face and non-dominant hand) and five for the dominant hand, one per finger. The ellipse and its parameters are shown in Fig. 2. The whole process is repeated until no more frames remain.
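The per-finger hue filtering and the Merge rule described above can be sketched as follows. The hue ranges here are hypothetical placeholders (the paper measures a non-overlapping range per glove color), and ellipses are simplified to plain values:

```python
# Sketch of per-finger hue filtering plus the Merge rule. The hue ranges
# below are hypothetical placeholders; the paper derives a non-overlapping
# (min, max) hue range for each glove color by measurement.

HUE_RANGES = {
    "thumb":  (0, 15),
    "index":  (20, 35),
    "middle": (40, 55),
    "ring":   (60, 75),
    "little": (80, 95),
}

def classify_hue(h):
    """Return the finger whose hue range contains h, or None if no range
    matches. Non-overlapping ranges make the answer unambiguous."""
    for finger, (lo, hi) in HUE_RANGES.items():
        if lo <= h <= hi:
            return finger
    return None

def merge(ellipses):
    """Merge rule from the paper: when contour fitting yields more than one
    ellipse for a finger, keep only the first detected ellipse."""
    return ellipses[0] if ellipses else None
```

Keeping the hue ranges disjoint is what allows one pass over the frame to assign every glove-colored pixel to exactly one finger tracker.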
Finally, feature characterization converts each ellipse and its parameters into feature vectors. The feature vector contains the position of the dominant hand's thumb in x and y coordinates and the x and y coordinates of the other fingers relative to the thumb position. The first two features are taken from the center coordinate (x, y) of the thumb ellipse; the rest are taken from the distance between the thumb and each other finger in x and y. Thus, there are 10 features per frame. The feature vectors are saved in XML format.

B. Recognizer

The recognizer learns the pattern from the feature vectors generated by the feature extraction module using a machine learning algorithm, here a Hidden Markov Model (HMM). The Cambridge University HMM Toolkit (HTK) [13] is used as the HMM library. The recognizer consists of three main parts. The first is the data preparation module, which generates all directories and files needed for the HMM processes. The second is the HInit module, which creates an HMM model for each sign language number and initializes the models with the forward-backward algorithm, using labeled training feature vectors from the feature extraction module as input. The last is the HModels module, which uses the labeled training feature vectors to re-estimate the HMM model parameters via the Baum-Welch method. After re-estimation, the recognizer recognizes the testing data, which is not included in the training data yet is already labeled for verification purposes; the Viterbi algorithm is used for recognition. Lastly, the recognizer interprets and evaluates the results and generates a report.

IV. RESULTS AND ANALYSIS

A. Feature Extraction

Table I summarizes the results of the feature extraction module in terms of time.
The feature extraction module ran for 5 hours, 42 minutes, and 35 seconds to extract features from the 5,000 Filipino Sign Language number videos. Feature extraction took a long time because it had to play all the videos one by one. Playing a video of numbers 1-9 takes 2 seconds (±30 frames).

Table I. Feature extraction results in terms of time
Start time: 2:32:55 PM
End time: 8:15:30 PM
Duration (HH:MM:SS): 05:42:35

Playing the videos of the numbers , , , , , , , , , takes 3 seconds each (±45 frames); the videos of the remaining numbers take 4 seconds each (±60 frames).
Thus, the total time for playing all the videos was 5 hours, 16 minutes, and 50 seconds. The feature extraction module took a little longer because it also had to switch from one video to the next and save the results to XML files.

Table II summarizes the results of the feature extraction module in terms of accuracy. For each frame there are five objects to be tracked, representing the five fingers. A non-trackable object means the color tracking module could not track the object (the finger); an incorrectly trackable object means the module detected the object but found more than one, even though the Merge algorithm was already applied.

Table II. Feature extraction results in terms of accuracy
Correctly trackable objects: 1,322,537
Non-trackable objects: 109,814
Incorrectly trackable objects: 4,664
Total objects: 1,437,015
%Tracking: 92.03%

The feature extraction module could track 1,322,537 of 1,437,015 objects; in other words, 92.03% of all objects could be tracked. The little finger was most often untrackable because of its small size. The second most untrackable object was the index finger, because it was occluded by the thumb at the beginning and end of each video. The color tracking sometimes detected more than one object despite the Merge algorithm, but only for a very small number of objects (4,664 of 1,437,015).

The causes of untrackable objects were occlusion, image blurring due to fast object movement, and changes in lighting conditions. Occlusion happened when one finger occluded another; most of the time, the index finger was occluded by the thumb at the beginning and end of each video. Image blurring happened when the hand moved too fast, for example when signing twin numbers (11, 22, 33, and so on) and the tens (10, 20, 30, etc.). Lighting conditions changed because the video was recorded under natural light from 9 am to 3 pm.
The movement of the hand also created shadows and changed the lighting conditions.

B. Recognizer

Two validation methods were used in this research. The first was five-fold validation, which generated five sets of testing and training data from the five video samples of each number (samples A, B, C, D, and E). Set A used the first sample of each sign language number as testing data and the other samples of the same number as training data; set B used the second sample, and so on, up to set E, which used the last sample of each number as testing data. Thus, five-fold validation created five validation sets, each consisting of 4,000 training samples and 1,000 testing samples.

The second validation procedure was leave-one-out validation, which uses all of the data except one sample for training and the remaining sample for testing, repeated for every possible choice; this can take a very long time. For this research, leave-one-out validation created 120 sets of testing and training data.

The experiments began with a four-state HMM model and increased the number of states until the maximum accuracy was found; afterwards, skip states were added. The 10-state HMM without skip states had the highest average accuracy, 85.52%.

Table III. Recognizer results
                   Set A    Set B    Set C    Set D    Set E
HInit (MM:SS)      32:08    31:32    31:36    31:45    31:33
HModels (MM:SS)     2:50     3:01     3:13     3:15     3:08
Total (MM:SS)      34:57    34:33    34:49    34:59    34:42
%Correct           76.70%   88.10%   89.00%   88.80%   85.00%
Accuracy           76.70%   88.10%   89.00%   88.80%   85.00%

The recognizer using the 10-state HMM without skip states achieved 85.52% accuracy on average. The maximum accuracy was 89.00%, using set C as input.
The minimum accuracy was 76.70%, using set A as input. Set A had the lowest accuracy because it contained the signer's first attempt at each sign, and the first sample differed significantly from the other samples. Set C probably had the highest accuracy because by then the signer had become accustomed to making the signs and produced more consistent signing. The average accuracy of leave-one-out validation was 85.52%, the same as the five-fold validation result; this happened because the samples are similar to one another.

V. CONCLUSION

The sign language number recognition system developed in this research provides a model suitable for recognizing numbers in Filipino
Sign Language. The system was also evaluated in terms of accuracy and time. Feature extraction could track 92.03% of all objects in 5 hours, 16 minutes, and 50 seconds using an Intel Core 2 Duo E GHz computer with 2GB of memory. It can be concluded from the feature extraction results that this research implemented computer vision techniques for robust, real-time color tracking, used in the feature extraction of the dominant hand and of skin (face and non-dominant hand). The recognizer could recognize Filipino Sign Language numbers from the extracted features: the 10-state HMM without skip states had the highest average accuracy, 85.52%, with a total average running time of 34 minutes and 48 seconds. Leave-one-out validation of the 10-state HMM without skip states yields the same accuracy, 85.52%.

This research is a pioneer in sign language recognition in the Philippines. It is far from perfect, but it provides a framework to be extended in future research. The video used as input could be improved, because the framing of the signer seems too distant: the signer held her hand too far down. In natural discourse, the dominant hand is placed about 3-4 inches to the side (and an inch or so in front) of the mouth, in what is called the finger-spelling space. Deaf signers who converse never look at the interlocutor's hand but at the eyes; this close placement of the hand to the face lets the signer use peripheral vision to catch the manual signal completely. The signing space should be the three-dimensional space from mid-torso to the top of the head, extending about a third of a shoulder width beyond either side. The video samples could also include all the variants of each number; for instance, there are two ways of signing each of 10, 16, 17, 18, and 19.
There are also additional unique signs for 21, 23, and 25 (with internal movement).

For further research, it is advisable to use other color systems such as YCrCb or CIE instead of HSV, and to use a more advanced color tracking algorithm such as K-Means, or other tracking algorithms such as the Lucas-Kanade feature tracker. Another possibility is using only skin color, without gloves, together with fingertip detection algorithms for feature extraction. The recognizer module could use other machine learning algorithms for time-series data, such as fuzzy clustering or neuro-fuzzy methods. Exploring the grammar features of the Hidden Markov Model Toolkit is also possible in further research.

REFERENCES

[1] Philippine Federation of the Deaf (2005). Filipino Sign Language: A Compilation of Signs from Regions of the Philippines, Part 1. Philippine Federation of the Deaf.
[2] Philippine Deaf Resource Center & Philippine Federation of the Deaf (2004). An Introduction to Filipino Sign Language, Parts I-III. Philippine Deaf Resource Center, Inc., Quezon City, Philippines.
[3] Zhang, L.-G., Chen, Y., Fang, G., Chen, X., & Gao, W. (2004). A vision-based sign language recognition system using tied-mixture density HMM. In ICMI '04: Proceedings of the 6th International Conference on Multimodal Interfaces, New York, NY, USA. ACM.
[4] Wang, Q., Chen, X., Zhang, L.-G., Wang, C., & Gao, W. (2007). Viewpoint invariant sign language recognition. Computer Vision and Image Understanding, 108.
[5] Starner, T., Weaver, J., & Pentland, A. (1998). Real-time American Sign Language recognition using desk and wearable computer based video. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12).
[6] Vogler, C. & Metaxas, D. (2004). Handshapes and movements: Multiple-channel ASL recognition. In Proceedings of Gesture Workshop '03, Genova, Italy. Springer Lecture Notes in Artificial Intelligence.
[7] Fabian, E. A., Or, I., Sosuan, L., & Uy, G. (2007). Vision-based hand mimicking system. In ROVISP07: Proceedings of the International Conference on Robotics, Vision, Information, and Signal Processing, Penang, Malaysia.
[8] Aguilos, V. S., Mariano, C. J. L., Mendoza, E. B. G., Orense, J. P. D., & Ong, C. Y. (2007). APoL: A portable letter sign language translator. Master's thesis, De La Salle University Manila.
[9] Brashear, H., Henderson, V., Park, K.-H., Hamilton, H., Lee, S., & Starner, T. (2006). American Sign Language recognition in game development for deaf children. In Assets '06: Proceedings of the 8th International ACM SIGACCESS Conference on Computers and Accessibility, pages 79-86, New York, NY, USA. ACM.
[10] Argyros, A. A. & Lourakis, M. I. A. (2004). Real-time tracking of multiple skin-colored objects with a possibly moving camera. In Proceedings of the European Conference on Computer Vision (ECCV '04), volume 3, Prague, Czech Republic. Springer-Verlag.
[11] Argyros, A. A. & Lourakis, M. I. A. (2006). Vision-based interpretation of hand gestures for remote control of a computer mouse. In ECCV Workshop on HCI, pages 40-51, Graz, Austria. Springer-Verlag, LNCS.
[12] Intel Software Product Open Source (2007). Open Source Computer Vision Library. [Online]. Available: (accessed March 6, 2008).
[13] Young, S., Evermann, G., Gales, M., Hain, T., Kershaw, D., Liu, X., Moore, G., Odell, J., Ollason, D., Povey, D., Valtchev, V., & Woodland, P. (2006). The HTK Book. Cambridge University Engineering Department. [Online]. Available: (accessed March 6, 2008).
More informationModeling the Use of Space for Pointing in American Sign Language Animation
Modeling the Use of Space for Pointing in American Sign Language Animation Jigar Gohel, Sedeeq Al-khazraji, Matt Huenerfauth Rochester Institute of Technology, Golisano College of Computing and Information
More informationdoi: / _59(
doi: 10.1007/978-3-642-39188-0_59(http://dx.doi.org/10.1007/978-3-642-39188-0_59) Subunit modeling for Japanese sign language recognition based on phonetically depend multi-stream hidden Markov models
More informationRecognition of Tamil Sign Language Alphabet using Image Processing to aid Deaf-Dumb People
Available online at www.sciencedirect.com Procedia Engineering 30 (2012) 861 868 International Conference on Communication Technology and System Design 2011 Recognition of Tamil Sign Language Alphabet
More informationAgitation sensor based on Facial Grimacing for improved sedation management in critical care
Agitation sensor based on Facial Grimacing for improved sedation management in critical care The 2 nd International Conference on Sensing Technology ICST 2007 C. E. Hann 1, P Becouze 1, J. G. Chase 1,
More informationBuilding an Application for Learning the Finger Alphabet of Swiss German Sign Language through Use of the Kinect
Zurich Open Repository and Archive University of Zurich Main Library Strickhofstrasse 39 CH-8057 Zurich www.zora.uzh.ch Year: 2014 Building an Application for Learning the Finger Alphabet of Swiss German
More informationFacial expression recognition with spatiotemporal local descriptors
Facial expression recognition with spatiotemporal local descriptors Guoying Zhao, Matti Pietikäinen Machine Vision Group, Infotech Oulu and Department of Electrical and Information Engineering, P. O. Box
More informationAn Approach to Global Gesture Recognition Translator
An Approach to Global Gesture Recognition Translator Apeksha Agarwal 1, Parul Yadav 2 1 Amity school of Engieering Technology, lucknow Uttar Pradesh, india 2 Department of Computer Science and Engineering,
More informationDeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation
DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation Biyi Fang Michigan State University ACM SenSys 17 Nov 6 th, 2017 Biyi Fang (MSU) Jillian Co (MSU) Mi Zhang
More informationThe American Sign Language Lexicon Video Dataset
The American Sign Language Lexicon Video Dataset Vassilis Athitsos 1, Carol Neidle 2, Stan Sclaroff 3, Joan Nash 2, Alexandra Stefan 3, Quan Yuan 3, and Ashwin Thangali 3 1 Computer Science and Engineering
More informationSign Language Recognition with the Kinect Sensor Based on Conditional Random Fields
Sensors 2015, 15, 135-147; doi:10.3390/s150100135 Article OPEN ACCESS sensors ISSN 1424-8220 www.mdpi.com/journal/sensors Sign Language Recognition with the Kinect Sensor Based on Conditional Random Fields
More informationSign Language to English (Slate8)
Sign Language to English (Slate8) App Development Nathan Kebe El Faculty Advisor: Dr. Mohamad Chouikha 2 nd EECS Day April 20, 2018 Electrical Engineering and Computer Science (EECS) Howard University
More informationIDENTIFICATION OF REAL TIME HAND GESTURE USING SCALE INVARIANT FEATURE TRANSFORM
Research Article Impact Factor: 0.621 ISSN: 2319507X INTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY A PATH FOR HORIZING YOUR INNOVATIVE WORK IDENTIFICATION OF REAL TIME
More informationSign Language Recognition System Using SIFT Based Approach
Sign Language Recognition System Using SIFT Based Approach Ashwin S. Pol, S. L. Nalbalwar & N. S. Jadhav Dept. of E&TC, Dr. BATU Lonere, MH, India E-mail : ashwin.pol9@gmail.com, nalbalwar_sanjayan@yahoo.com,
More informationSign Language Recognition using Kinect
Sign Language Recognition using Kinect Edon Mustafa 1, Konstantinos Dimopoulos 2 1 South-East European Research Centre, University of Sheffield, Thessaloniki, Greece 2 CITY College- International Faculty
More informationHand Sign to Bangla Speech: A Deep Learning in Vision based system for Recognizing Hand Sign Digits and Generating Bangla Speech
Hand Sign to Bangla Speech: A Deep Learning in Vision based system for Recognizing Hand Sign Digits and Generating Bangla Speech arxiv:1901.05613v1 [cs.cv] 17 Jan 2019 Shahjalal Ahmed, Md. Rafiqul Islam,
More informationInternational Journal of Software and Web Sciences (IJSWS)
International Association of Scientific Innovation and Research (IASIR) (An Association Unifying the Sciences, Engineering, and Applied Research) ISSN (Print): 2279-0063 ISSN (Online): 2279-0071 International
More informationGesture Recognition using Marathi/Hindi Alphabet
Gesture Recognition using Marathi/Hindi Alphabet Rahul Dobale ¹, Rakshit Fulzele², Shruti Girolla 3, Seoutaj Singh 4 Student, Computer Engineering, D.Y. Patil School of Engineering, Pune, India 1 Student,
More informationAccessible Computing Research for Users who are Deaf and Hard of Hearing (DHH)
Accessible Computing Research for Users who are Deaf and Hard of Hearing (DHH) Matt Huenerfauth Raja Kushalnagar Rochester Institute of Technology DHH Auditory Issues Links Accents/Intonation Listening
More informationA Framework for Motion Recognition with Applications to American Sign Language and Gait Recognition
University of Pennsylvania ScholarlyCommons Center for Human Modeling and Simulation Department of Computer & Information Science 12-7-2000 A Framework for Motion Recognition with Applications to American
More informationVideo-Based Recognition of Fingerspelling in Real-Time. Kirsti Grobel and Hermann Hienz
Video-Based Recognition of Fingerspelling in Real-Time Kirsti Grobel and Hermann Hienz Lehrstuhl für Technische Informatik, RWTH Aachen Ahornstraße 55, D - 52074 Aachen, Germany e-mail: grobel@techinfo.rwth-aachen.de
More informationUsing Deep Convolutional Networks for Gesture Recognition in American Sign Language
Using Deep Convolutional Networks for Gesture Recognition in American Sign Language Abstract In the realm of multimodal communication, sign language is, and continues to be, one of the most understudied
More informationNoise-Robust Speech Recognition Technologies in Mobile Environments
Noise-Robust Speech Recognition echnologies in Mobile Environments Mobile environments are highly influenced by ambient noise, which may cause a significant deterioration of speech recognition performance.
More informationFace Analysis : Identity vs. Expressions
Hugo Mercier, 1,2 Patrice Dalle 1 Face Analysis : Identity vs. Expressions 1 IRIT - Université Paul Sabatier 118 Route de Narbonne, F-31062 Toulouse Cedex 9, France 2 Websourd Bâtiment A 99, route d'espagne
More informationThe Sign2 Project Digital Translation of American Sign- Language to Audio and Text
The Sign2 Project Digital Translation of American Sign- Language to Audio and Text Fitzroy Lawrence, Jr. Advisor: Dr. Chance Glenn, The Center for Advanced Technology Development Rochester Institute of
More informationChapter 1. Fusion of Manual and Non-Manual Information in American Sign Language Recognition
Chapter 1 Fusion of Manual and Non-Manual Information in American Sign Language Recognition Sudeep Sarkar 1, Barbara Loeding 2, and Ayush S. Parashar 1 1 Computer Science and Engineering 2 Special Education
More informationAVR Based Gesture Vocalizer Using Speech Synthesizer IC
AVR Based Gesture Vocalizer Using Speech Synthesizer IC Mr.M.V.N.R.P.kumar 1, Mr.Ashutosh Kumar 2, Ms. S.B.Arawandekar 3, Mr.A. A. Bhosale 4, Mr. R. L. Bhosale 5 Dept. Of E&TC, L.N.B.C.I.E.T. Raigaon,
More informationUsing Multiple Sensors for Mobile Sign Language Recognition
Using Multiple Sensors for Mobile Sign Language Recognition Helene Brashear & Thad Starner College of Computing, GVU Center Georgia Institute of Technology Atlanta, Georgia 30332-0280 USA {brashear, thad}@cc.gatech.edu
More informationDesign of Palm Acupuncture Points Indicator
Design of Palm Acupuncture Points Indicator Wen-Yuan Chen, Shih-Yen Huang and Jian-Shie Lin Abstract The acupuncture points are given acupuncture or acupressure so to stimulate the meridians on each corresponding
More informationA Real-time Gesture Recognition System for Isolated Swedish Sign Language Signs
A Real-time Gesture Recognition System for Isolated Swedish Sign Language Signs Kalin Stefanov KTH Royal Institute of Technology TMH Speech, Music and Hearing Stockholm, Sweden kalins@kth.se Jonas Beskow
More informationA Framework for Motion Recognition with Applications to American Sign Language and Gait Recognition
A Framework for otion Recognition with Applications to American Sign Language and Gait Recognition Christian Vogler, Harold Sun and Dimitris etaxas Vision, Analysis, and Simulation Technologies Laboratory
More informationScalable ASL sign recognition using model-based machine learning and linguistically annotated corpora
Boston University OpenBU Linguistics http://open.bu.edu BU Open Access Articles 2018-05-12 Scalable ASL sign recognition using model-based machine learning and linguistically annotated corpora Metaxas,
More informationLabview Based Hand Gesture Recognition for Deaf and Dumb People
International Journal of Engineering Science Invention (IJESI) ISSN (Online): 2319 6734, ISSN (Print): 2319 6726 Volume 7 Issue 4 Ver. V April 2018 PP 66-71 Labview Based Hand Gesture Recognition for Deaf
More informationDevelopment of an Electronic Glove with Voice Output for Finger Posture Recognition
Development of an Electronic Glove with Voice Output for Finger Posture Recognition F. Wong*, E. H. Loh, P. Y. Lim, R. R. Porle, R. Chin, K. Teo and K. A. Mohamad Faculty of Engineering, Universiti Malaysia
More informationEXTENSION OF HIDDEN MARKOV MODEL FOR
EXTENSION OF HIDDEN MARKOV MODEL FOR RECOGNIZING LARGE VOCABULARY OF SIGN LANGUAGE ABSTRACT Maher Jebali and Mohamed Jemni Research Lab. LaTICE ESSTT University of Tunis Tunisia maher.jbeli@gmail.com mohamed.jemni@fst.rnu.tn
More informationAnalysis of Recognition System of Japanese Sign Language using 3D Image Sensor
Analysis of Recognition System of Japanese Sign Language using 3D Image Sensor Yanhua Sun *, Noriaki Kuwahara**, Kazunari Morimoto *** * oo_alison@hotmail.com ** noriaki.kuwahara@gmail.com ***morix119@gmail.com
More informationHandTalker II: A Chinese Sign language Recognition and Synthesis System
HandTalker II: A Chinese Sign language Recognition and Synthesis System Wen Gao [1][2], Yiqiang Chen [1], Gaolin Fang [2], Changshui Yang [1], Dalong Jiang [1], Chunbao Ge [3], Chunli Wang [1] [1] Institute
More informationSignInstructor: An Effective Tool for Sign Language Vocabulary Learning
SignInstructor: An Effective Tool for Sign Language Vocabulary Learning Xiujuan Chai, Zhuang Liu, Yongjun Li, Fang Yin, Xilin Chen Key Lab of Intelligent Information Processing of Chinese Academy of Sciences(CAS),
More informationSign Language Recognition using Webcams
Sign Language Recognition using Webcams Overview Average person s typing speed Composing: ~19 words per minute Transcribing: ~33 words per minute Sign speaker Full sign language: ~200 words per minute
More informationOnline Speaker Adaptation of an Acoustic Model using Face Recognition
Online Speaker Adaptation of an Acoustic Model using Face Recognition Pavel Campr 1, Aleš Pražák 2, Josef V. Psutka 2, and Josef Psutka 2 1 Center for Machine Perception, Department of Cybernetics, Faculty
More informationThe 29th Fuzzy System Symposium (Osaka, September 9-, 3) Color Feature Maps (BY, RG) Color Saliency Map Input Image (I) Linear Filtering and Gaussian
The 29th Fuzzy System Symposium (Osaka, September 9-, 3) A Fuzzy Inference Method Based on Saliency Map for Prediction Mao Wang, Yoichiro Maeda 2, Yasutake Takahashi Graduate School of Engineering, University
More informationRecognizing Non-manual Signals in Filipino Sign Language
Recognizing Non-manual Signals in Filipino Sign Language Joanna Pauline Rivera, Clement Ong De La Salle University Taft Avenue, Manila 1004 Philippines joanna rivera@dlsu.edu.ph, clem.ong@delasalle.ph
More informationImage processing applications are growing rapidly. Most
RESEARCH ARTICLE Kurdish Sign Language Recognition System Abdulla Dlshad, Fattah Alizadeh Department of Computer Science and Engineering, University of Kurdistan Hewler, Erbil, Kurdistan Region - F.R.
More informationNeuromorphic convolutional recurrent neural network for road safety or safety near the road
Neuromorphic convolutional recurrent neural network for road safety or safety near the road WOO-SUP HAN 1, IL SONG HAN 2 1 ODIGA, London, U.K. 2 Korea Advanced Institute of Science and Technology, Daejeon,
More informationA Survey on Hand Gesture Recognition for Indian Sign Language
A Survey on Hand Gesture Recognition for Indian Sign Language Miss. Juhi Ekbote 1, Mrs. Mahasweta Joshi 2 1 Final Year Student of M.E. (Computer Engineering), B.V.M Engineering College, Vallabh Vidyanagar,
More informationAuslan Sign Recognition Using Computers and Gloves
Auslan Sign Recognition Using Computers and Gloves Mohammed Waleed Kadous School of Computer Science and Engineering University of New South Wales waleed@cse.unsw.edu.au http://www.cse.unsw.edu.au/~waleed/
More informationInternational Journal of Multidisciplinary Approach and Studies
A Review Paper on Language of sign Weighted Euclidean Distance Based Using Eigen Value Er. Vandana Soni*, Mr. Pratyoosh Rai** *M. Tech Scholar, Department of Computer Science, Bhabha Engineering Research
More informationFigure 1: The relation between xyz and HSV. skin color in HSV color space from the extracted skin regions. At each frame, our system tracks the face,
Extraction of Hand Features for Recognition of Sign Language Words Nobuhiko Tanibata tanibata@cv.mech.eng.osaka-u.ac.jp Yoshiaki Shirai shirai@cv.mech.eng.osaka-u.ac.jp Nobutaka Shimada shimada@cv.mech.eng.osaka-u.ac.jp
More informationSkin color detection for face localization in humanmachine
Research Online ECU Publications Pre. 2011 2001 Skin color detection for face localization in humanmachine communications Douglas Chai Son Lam Phung Abdesselam Bouzerdoum 10.1109/ISSPA.2001.949848 This
More informationHOME SCHOOL ASL 101. Lesson 1- Fingerspelling
HOME SCHOOL ASL 101 Lesson 1- Fingerspelling American Sign Language- ASL It is what most of the North American Deaf use to communicate. The Deaf use their hands, face, and body expressions to talk. Why
More informationA Review on Gesture Vocalizer
A Review on Gesture Vocalizer Deena Nath 1, Jitendra Kurmi 2, Deveki Nandan Shukla 3 1, 2, 3 Department of Computer Science, Babasaheb Bhimrao Ambedkar University Lucknow Abstract: Gesture Vocalizer is
More informationAnalysis of Emotion Recognition using Facial Expressions, Speech and Multimodal Information
Analysis of Emotion Recognition using Facial Expressions, Speech and Multimodal Information C. Busso, Z. Deng, S. Yildirim, M. Bulut, C. M. Lee, A. Kazemzadeh, S. Lee, U. Neumann, S. Narayanan Emotion
More informationFacial Feature Tracking and Occlusion Recovery in American Sign Language
Boston University Computer Science Technical Report No. 2005-024 Facial Feature Tracking and Occlusion Recovery in American Sign Language Thomas J. Castelli 1, Margrit Betke 1, and Carol Neidle 2 1 Department
More informationThe Leap Motion controller: A view on sign language
The Leap Motion controller: A view on sign language Author Potter, Leigh-Ellen, Araullo, Jake, Carter, Lewis Published 2013 Conference Title The 25th Australian Computer-Human Interaction Conference DOI
More informationKatsunari Shibata and Tomohiko Kawano
Learning of Action Generation from Raw Camera Images in a Real-World-Like Environment by Simple Coupling of Reinforcement Learning and a Neural Network Katsunari Shibata and Tomohiko Kawano Oita University,
More information3. MANUAL ALPHABET RECOGNITION STSTM
Proceedings of the IIEEJ Image Electronics and Visual Computing Workshop 2012 Kuching, Malaysia, November 21-24, 2012 JAPANESE MANUAL ALPHABET RECOGNITION FROM STILL IMAGES USING A NEURAL NETWORK MODEL
More informationPupil Dilation as an Indicator of Cognitive Workload in Human-Computer Interaction
Pupil Dilation as an Indicator of Cognitive Workload in Human-Computer Interaction Marc Pomplun and Sindhura Sunkara Department of Computer Science, University of Massachusetts at Boston 100 Morrissey
More informationAction Recognition based on Hierarchical Self-Organizing Maps
Action Recognition based on Hierarchical Self-Organizing Maps Miriam Buonamente 1, Haris Dindo 1, and Magnus Johnsson 2 1 RoboticsLab, DICGIM, University of Palermo, Viale delle Scienze, Ed. 6, 90128 Palermo,
More informationResearch Proposal on Emotion Recognition
Research Proposal on Emotion Recognition Colin Grubb June 3, 2012 Abstract In this paper I will introduce my thesis question: To what extent can emotion recognition be improved by combining audio and visual
More informationExperiments In Interaction Between Wearable and Environmental Infrastructure Using the Gesture Pendant
Experiments In Interaction Between Wearable and Environmental Infrastructure Using the Gesture Pendant Daniel Ashbrook, Jake Auxier, Maribeth Gandy, and Thad Starner College of Computing and Interactive
More informationSigning for the deaf using virtual humans
Signing for the deaf using virtual humans JA Bangham, SJ Cox, M Lincoln, I Marshall University of East Anglia, Norwich {ab, s j c, ml, im}@ sys. uea. ac. uk M Tutt,M Wells TeleVial, Norwich {marcus, mark}@televirtual.
More informationA Communication tool, Mobile Application Arabic & American Sign Languages (ARSL) Sign Language (ASL) as part of Teaching and Learning
A Communication tool, Mobile Application Arabic & American Sign Languages (ARSL) Sign Language (ASL) as part of Teaching and Learning Fatima Al Dhaen Ahlia University Information Technology Dep. P.O. Box
More informationUsing $1 UNISTROKE Recognizer Algorithm in Gesture Recognition of Hijaiyah Malaysian Hand-Code
Using $ UNISTROKE Recognizer Algorithm in Gesture Recognition of Hijaiyah Malaysian Hand-Code Nazean Jomhari,2, Ahmed Nazim, Nor Aziah Mohd Daud 2, Mohd Yakub Zulkifli 2 Izzaidah Zubi & Ana Hairani 2 Faculty
More informationTwo Themes. MobileASL: Making Cell Phones Accessible to the Deaf Community. Our goal: Challenges: Current Technology for Deaf People (text) ASL
Two Themes MobileASL: Making Cell Phones Accessible to the Deaf Community MobileASL AccessComputing Alliance Advancing Deaf and Hard of Hearing in Computing Richard Ladner University of Washington ASL
More informationQuality Assessment of Human Hand Posture Recognition System Er. ManjinderKaur M.Tech Scholar GIMET Amritsar, Department of CSE
Quality Assessment of Human Hand Posture Recognition System Er. ManjinderKaur M.Tech Scholar GIMET Amritsar, Department of CSE mkwahla@gmail.com Astt. Prof. Prabhjit Singh Assistant Professor, Department
More informationA Vision-based Affective Computing System. Jieyu Zhao Ningbo University, China
A Vision-based Affective Computing System Jieyu Zhao Ningbo University, China Outline Affective Computing A Dynamic 3D Morphable Model Facial Expression Recognition Probabilistic Graphical Models Some
More informationINDIAN SIGN LANGUAGE RECOGNITION USING NEURAL NETWORKS AND KNN CLASSIFIERS
INDIAN SIGN LANGUAGE RECOGNITION USING NEURAL NETWORKS AND KNN CLASSIFIERS Madhuri Sharma, Ranjna Pal and Ashok Kumar Sahoo Department of Computer Science and Engineering, School of Engineering and Technology,
More informationDimensional Emotion Prediction from Spontaneous Head Gestures for Interaction with Sensitive Artificial Listeners
Dimensional Emotion Prediction from Spontaneous Head Gestures for Interaction with Sensitive Artificial Listeners Hatice Gunes and Maja Pantic Department of Computing, Imperial College London 180 Queen
More informationTeacher/Class: Ms. Brison - ASL II. Standards Abilities Level 2. Page 1 of 5. Week Dates: Oct
Teacher/Class: Ms. Brison - ASL II Week Dates: Oct. 10-13 Standards Abilities Level 2 Objectives Finger Spelling Finger spelling of common names and places Basic lexicalized finger spelling Numbers Sentence
More informationHuman Machine Interface Using EOG Signal Analysis
Human Machine Interface Using EOG Signal Analysis Krishna Mehta 1, Piyush Patel 2 PG Student, Dept. of Biomedical, Government Engineering College, Gandhinagar, Gujarat, India 1 Assistant Professor, Dept.
More informationHAND GESTURE RECOGNITION FOR HUMAN COMPUTER INTERACTION
e-issn 2455 1392 Volume 2 Issue 5, May 2016 pp. 241 245 Scientific Journal Impact Factor : 3.468 http://www.ijcter.com HAND GESTURE RECOGNITION FOR HUMAN COMPUTER INTERACTION KUNIKA S. BARAI 1, PROF. SANTHOSH
More informationCalibration Guide for CyberGlove Matt Huenerfauth & Pengfei Lu The City University of New York (CUNY) Document Version: 4.4
Calibration Guide for CyberGlove Matt Huenerfauth & Pengfei Lu The City University of New York (CUNY) Document Version: 4.4 These directions can be used to guide the process of Manual Calibration of the
More informationA HMM-based Pre-training Approach for Sequential Data
A HMM-based Pre-training Approach for Sequential Data Luca Pasa 1, Alberto Testolin 2, Alessandro Sperduti 1 1- Department of Mathematics 2- Department of Developmental Psychology and Socialisation University
More informationCopyright is owned by the Author of the thesis. Permission is given for a copy to be downloaded by an individual for the purpose of research and
Copyright is owned by the Author of the thesis. Permission is given for a copy to be downloaded by an individual for the purpose of research and private study only. The thesis may not be reproduced elsewhere
More informationABSTRACT I. INTRODUCTION
International Journal of Scientific Research in Computer Science, Engineering and Information Technology 2017 IJSRCSEIT Volume 2 Issue 5 ISSN : 2456-3307 An Innovative Artificial Replacement to Facilitate
More informationAnalyzing Well-Formedness of Syllables in Japanese Sign Language
Analyzing Well-Formedness of Syllables in Japanese Sign Language Satoshi Yawata Makoto Miwa Yutaka Sasaki Daisuke Hara Toyota Technological Institute 2-12-1 Hisakata, Tempaku-ku, Nagoya, Aichi, 468-8511,
More informationSupporting Arabic Sign Language Recognition with Facial Expressions
Supporting Arabic Sign Language Recognition with Facial Expressions SASLRWFE Ghada Dahy Fathy Faculty of Computers and Information Cairo University Cairo, Egypt g.dahy@fci-cu.edu.eg E.Emary Faculty of
More informationLocal Image Structures and Optic Flow Estimation
Local Image Structures and Optic Flow Estimation Sinan KALKAN 1, Dirk Calow 2, Florentin Wörgötter 1, Markus Lappe 2 and Norbert Krüger 3 1 Computational Neuroscience, Uni. of Stirling, Scotland; {sinan,worgott}@cn.stir.ac.uk
More informationCharacterization of 3D Gestural Data on Sign Language by Extraction of Joint Kinematics
Human Journals Research Article October 2017 Vol.:7, Issue:4 All rights are reserved by Newman Lau Characterization of 3D Gestural Data on Sign Language by Extraction of Joint Kinematics Keywords: hand
More informationA Wearable Hand Gloves Gesture Detection based on Flex Sensors for disabled People
A Wearable Hand Gloves Gesture Detection based on Flex Sensors for disabled People Kunal Purohit 1, Prof. Kailash Patidar 2, Mr. Rishi Singh Kushwah 3 1 M.Tech Scholar, 2 Head, Computer Science & Engineering,
More informationExperimental evaluation of the accuracy of the second generation of Microsoft Kinect system, for using in stroke rehabilitation applications
Experimental evaluation of the accuracy of the second generation of Microsoft Kinect system, for using in stroke rehabilitation applications Mohammad Hossein Saadatzi 1 Home-based Stroke Rehabilitation
More informationPerformance Analysis of different Classifiers for Chinese Sign Language Recognition
IOSR Journal of Electronics and Communication Engineering (IOSR-JECE) e-issn: 2278-2834,p- ISSN: 2278-8735.Volume 11, Issue 2, Ver. II (Mar-Apr.216), PP 47-54 www.iosrjournals.org Performance Analysis
More informationIntelligent Frozen Shoulder Self-Home Rehabilitation Monitoring System
Intelligent Frozen Shoulder Self-Home Rehabilitation Monitoring System Jiann-I Pan* 1, Hui-Wen Chung 1, and Jen-Ju Huang 2 1 Department of Medical Informatics, Tzu-Chi University, Hua-Lien, Taiwan 2 Rehabilitation
More information