Proceedings of the IIEEJ Image Electronics and Visual Computing Workshop 2012, Kuching, Malaysia, November 21-24, 2012

JAPANESE MANUAL ALPHABET RECOGNITION FROM STILL IMAGES USING A NEURAL NETWORK MODEL

Makoto J. Hirayama and Masahiro Funakawa
Kanazawa Institute of Technology

ABSTRACT

The Japanese manual alphabet is used mainly by hearing impaired persons as a complement to sign language. Using a neural network model, a Japanese manual alphabet recognition system is implemented. Still images of finger shapes from two directions are used as inputs to a multi-layer perceptron, and the classification of the alphabet is the output. Overall recognition performance is around 88% on average, although the rates varied depending on evaluation conditions.

Keywords: Japanese manual alphabet, sign language recognition, image processing, neural network model

1. INTRODUCTION

To provide reasonable accommodation to persons with disabilities in aural speech communication, preparing an accessible environment is a duty in all kinds of places. For hearing impairment, communication aids that assist communication between hearing impaired and hearing persons should be supplied, and information contents should be presented accessibly. The Convention on the Rights of Persons with Disabilities [1] was adopted by the United Nations in 2006, although Japan has not ratified it yet. It prohibits discrimination on the basis of disability and supports the accessibility and participation of persons with disabilities. It is therefore necessary to supply communication assistance, and technologies for communication aids must be studied and developed. In Japan, there are about 300 thousand hearing impaired persons according to counts by the ministry of welfare [2], and about 50 thousand persons are said to use sign language [3] as their usual communication method.
Although the supply of information and communication in sign language should be guaranteed for these people, it is difficult for most Japanese speakers to use sign language. Therefore, we are considering building a sign language interpreter device based on sign language recognition in the future. As the initial step toward Japanese sign language interpreting technology, we study Japanese manual alphabet recognition. A method for recognizing the Japanese manual alphabet from images captured by two web cameras is studied, to be applied to a wellbeing system for assisting communication between hearing impaired and hearing persons.

Several pattern recognition algorithms for the Japanese manual alphabet have been proposed. For example, Watanabe et al. [4] proposed a recognition method using a color glove and single camera capturing. Mitome et al. [5] proposed a fast recognition method for ten numeral finger shapes using a single camera image. Other methods use special devices such as 3D scanners [6], glove devices [7] [8], thermal sensors, etc. Recognition algorithms themselves have also been discussed [9] [10] [11] [12] [13] [14]. However, none of these methods is generally complete yet.

In this paper, hand areas are extracted from the original web camera images using human skin color values [15]. Silhouette images of the hand, represented in black and white binary format, are the inputs to a three layer perceptron type neural network, and the Japanese manual alphabet syllable is the output. For the alphabet characters expressed by manual motions, sequences of multiple still images are used as additional information for classification.

2. JAPANESE MANUAL ALPHABET DATA

2.1. Japanese Manual Alphabet

The Japanese manual alphabet, also called the finger alphabet or finger spelling, is usually used together with sign language to express spoken or written Japanese, such as names of people and places, that cannot be expressed by words in sign language.
The current Japanese finger alphabet was defined by G. Osone, the principal of Osaka city deaf school, in 1931 [16], by modifying and extending the American manual alphabet. Each finger alphabet character corresponds to a Japanese Kana, the syllabic unit of Japanese. The number of basic Japanese syllabic units is 46 (the so-called 50-On, pronounced Gojyu-On), but some additional symbols are used with the basic units to express additional syllabic pronunciation symbols (Dakuten, Han-Dakuten, Yo-On, and Cho-On in Japanese). Many Japanese finger alphabet characters are expressed by shapes of one hand without motion, but some characters and all pronunciation symbols are expressed with motion. Fig. 1 shows examples of these shapes. Only 20 finger alphabet characters, "a, i, u, e, o, ka, ki, ku, ke, ko, sa,
shi, su, se, so, ta, chi, tu, te, to" are shown out of the 46 basic characters. The input image data to the recognition system reported in this paper are similar to those in Fig. 1.

Fig. 1. An example of the Japanese manual alphabet: "a, i, u, e, o, ka, ki, ku, ke, ko, sa, si, su, se, so."

2.2. Recognition Objects

The recognition objects are 41 Japanese manual alphabet characters without motion, 4 characters with motion (/NO/, /MO/, /RI/, and /NN/), and 34 arm motions for voiced syllables, plosive syllables, palatalized syllables, and long sounding marks ("Daku-on," "Handaku-on," "Yo-on," and "Cho-on" in Japanese). To express voiced, plosive, or palatalized syllables or long sounding marks, the hand position is moved by the arm from center to right, from low to up, from front to back, or from up to low, all without hand shape changes during the movement.

The manual alphabet images are captured by two web cameras. The best two positions found in a previously reported camera placement study [17] [18] are used as the positions of the two cameras. Fig. 2 shows example views from the two web cameras: (a) front view and (b) side view. Subjects wore long sleeve black or dark blue shirts for easy hand area extraction by chroma-keying, and single colored backing paper was used as the background for easier image extraction.

Fig. 2. Views from two web cameras.

3. MANUAL ALPHABET RECOGNITION SYSTEM

3.1. System Overview

Fig. 3 shows a schematic diagram of the Japanese finger alphabet recognition system that was developed. The input of the system is motion video or still pictures from web cameras. The output of the system is a recognition result, that is, a Japanese Kana character text. The output may be connected to a speech synthesis system so that the textual sequence can be heard. The stages in Fig. 3 are: hand images from the web cameras; image preprocessing (hand region extraction, noise reduction, binary silhouette imaging); characteristic parameter calculation and shrunken binary image data; the learned recognition engine (neural network model); and the recognition results (manual alphabet classification).

Fig. 3. Schematic diagram of the Japanese manual alphabet recognition system.

First, as shown in Fig. 3, image preprocessing is applied to the input images from the web cameras. The most important processing at this stage is hand region extraction. After this processing, a binary silhouette image of the hand is produced. The preprocessed data are then transformed by two types of processing: one is characteristic parameter calculation and the other is binary silhouette image data adjustment. These become the inputs to the recognition engine.
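As a concrete illustration of the hand region extraction stage, the following is a minimal sketch in Python/NumPy of the skin-color rule described in Sec. 3.2 (pixels with R > G > B are skin candidates, the R channel gives a gray scale image, and a fixed threshold binarizes it). The function names and the threshold value 80 are assumptions for illustration, not part of the paper.

```python
import numpy as np

def extract_hand_silhouette(rgb, threshold=80):
    """Hand-region extraction sketch: keep pixels whose channels satisfy
    R > G > B (skin-color candidates), gray-scale via the R channel,
    then binarize with a fixed threshold (assumed value)."""
    r = rgb[..., 0].astype(np.int32)
    g = rgb[..., 1].astype(np.int32)
    b = rgb[..., 2].astype(np.int32)
    skin = (r > g) & (g > b)                       # skin-color candidate mask
    gray = np.where(skin, r, 0)                    # gray scale from R values only
    return (gray > threshold).astype(np.uint8)     # black/white silhouette

def bounding_box(binary):
    """Feret-style bounding box of the silhouette: (top, bottom, left, right)
    row/column extents, used to crop the hand region."""
    rows = np.flatnonzero(binary.any(axis=1))
    cols = np.flatnonzero(binary.any(axis=0))
    return rows[0], rows[-1] + 1, cols[0], cols[-1] + 1
```

On a frame with a skin-colored patch on a dark background, `extract_hand_silhouette` leaves only the patch, and `bounding_box` gives the crop window for the later parameter calculation.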
As the recognition engine, a three layer perceptron neural network model is used. This type of neural network model is often used for image recognition. Finally, the recognition engine outputs the recognition results as Japanese Kana text characters.

In the current system, recognition is based on still images; information obtained from the video images in the time domain is not used. Manual alphabet characters with motion and pronunciation symbols are recognized as sequences of still images: for example, 3 images at the start, middle, and end points are recognized independently as parts of one character. More detailed algorithms of the system are explained in the following sections.

3.2. Image Preprocessing

Using human skin color detection in the RGB color space [15], a hand area is extracted from each picture. Simply, if a pixel's values satisfy R > G > B, the pixel is a candidate for the hand area. After this extraction, the color image is converted to a gray scale image using the R values, and then to a black and white binary image using a threshold value. Finally, a hand region is extracted using the Feret's diameter.

3.3. Characteristic Parameter Calculation and Shrunken Binary Image Data Preparation

The extracted hand region binary image described above under image preprocessing is the basis of the characteristic parameter calculation. The left figure in Fig. 4 is an example. First, parameters (1) through (3) are calculated from the original image before shrinking.

(1) Roundness: C = 4πS / L², where S denotes the area of the silhouette and L denotes its perimeter.

(2) Height and width ratio: A = H / (H + W), where H denotes height and W denotes width.

(3) Ratio of the area of the silhouette to the Feret's diameter box: D = S / (HW).

Then the image is shrunk to a 25 x 25 pixel square to normalize the data used for recognition; the right figure in Fig. 4 is a shrunken image. Parameters (4) through (6) are calculated from this shrunken image.

Fig. 4. Shrinking the black and white binary image into a 25 x 25 pixel image.

(4) Ratios of silhouette area to region area for the left, left-center, center, right-center, and right regions (5 parameters per image). Fig. 5 shows these areas.

Fig. 5. Black and white ratios within vertically split regions.

(5) Ratios of silhouette area to region area for the up, up-middle, middle, middle-low, and low regions (5 parameters per image). Fig. 6 shows these areas.

Fig. 6. Black and white ratios within horizontally split regions.

(6) The binary pixel values themselves (25 x 25 = 625 values per image), as shown in Fig. 7.

Fig. 7. Binary pixel values (25 x 25 = 625).
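The parameter calculations above can be sketched as follows, assuming a cropped binary hand image. The paper does not specify how the perimeter L is estimated or how the image is shrunk, so the boundary-pixel count and the nearest-neighbour subsampling below are assumptions for illustration.

```python
import numpy as np

def characteristic_parameters(hand):
    """Parameters (1)-(3) from the cropped binary hand image:
    roundness C = 4*pi*S / L**2, height/width ratio A = H/(H+W),
    and area ratio D = S/(H*W).  The perimeter L is estimated here
    as a simple boundary-pixel count (assumed estimator)."""
    H, W = hand.shape
    S = int(hand.sum())
    padded = np.pad(hand, 1)
    # a foreground pixel is on the boundary if any 4-neighbour is background
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    L = int(((hand == 1) & (interior == 0)).sum())
    C = 4 * np.pi * S / L**2 if L else 0.0
    A = H / (H + W)
    D = S / (H * W)
    return C, A, D

def shrink(hand, n=25):
    """Shrink the cropped silhouette to an n x n binary image
    (nearest-neighbour subsampling; one plausible choice)."""
    H, W = hand.shape
    ys = np.arange(n) * H // n
    xs = np.arange(n) * W // n
    return hand[np.ix_(ys, xs)]

def strip_ratios(img25):
    """Parameters (4)-(5): silhouette fill ratios of 5 vertical and
    5 horizontal strips of the 25 x 25 image (5 + 5 values)."""
    cols = [img25[:, 5 * i:5 * i + 5].mean() for i in range(5)]
    rows = [img25[5 * i:5 * i + 5, :].mean() for i in range(5)]
    return cols, rows
```

Parameter (6) is then simply `shrink(hand).ravel()`, the 625 binary pixel values fed to the network.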
3.4. Neural Network Model

Hand shape recognition from still images is done using a three layer perceptron neural network model. The inputs are mainly parameters (6), that is, the 625 binary pixel values. In addition, parameters (1) through (5) are added as inputs, since better performance was obtained with them in the experiment. The output of the network is the resulting character from /A/, /I/, /U/, /E/, /O/, ..., to /NN/ (41 units); the output unit with the highest value is taken as the recognition result. 300 hidden layer units were used for the final evaluation, since this gave the best performance over many trials with different numbers of hidden layer units. For the simulation, MATLAB with the neural network toolbox is used.

Fig. 8. Three layer perceptron type neural network (pixel values, hidden layer, output layer).

3.5. Hand Motion Tracking

For hand motion recognition, the time series of the center of mass of the silhouette in the binary images is tracked. By combining this movement information with the recognition results for still hand shape images, manual alphabet recognition including pronunciation symbols (i.e., Dakuten, Han-Dakuten, Yo-On, and Cho-On in Japanese) is achieved. So far, segmentation of character boundaries has not been automated, and the time series for one character is supplied by human operation.

The center of mass of the silhouette is calculated for both the front and side views. The position is represented relative to the whole web camera frame. From the sequence of centers of mass, movement is detected as (1) from center to left in the web camera frame for voiced syllables, (2) from up to down for long sounding marks, (3) from down to up for plosive syllables, or (4) from front to back for palatalized syllables. Fig. 9 shows the centers of mass of silhouettes and the movement direction axes. By adding the movement information, the still image recognition results are separated into distinct characters.
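A minimal NumPy sketch of the network's forward pass (Sec. 3.4) is shown below. The layer sizes come from the paper (625 pixels plus the 13 shape parameters in, 300 hidden units, 41 output classes); the random weights are placeholders standing in for the trained MATLAB network, and the sigmoid hidden activation is an assumption, since the paper does not state the activation function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from the paper; weights are random placeholders,
# NOT the trained network.
N_IN, N_HID, N_OUT = 625 + 13, 300, 41
W1 = rng.normal(0, 0.1, (N_HID, N_IN))
b1 = np.zeros(N_HID)
W2 = rng.normal(0, 0.1, (N_OUT, N_HID))
b2 = np.zeros(N_OUT)

def recognize(features):
    """Three layer perceptron forward pass: sigmoid hidden layer,
    linear output, winner-take-all over the 41 character units."""
    h = 1.0 / (1.0 + np.exp(-(W1 @ features + b1)))   # hidden activations
    y = W2 @ h + b2                                   # output scores
    return int(np.argmax(y))                          # highest unit wins
```

In use, `features` would be the 625 shrunken pixel values concatenated with parameters (1) through (5), and the returned index selects one of the 41 Kana characters.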
For example, a still image of the Japanese "KA" manual character combined with a center-to-left movement expressing "Dakuten" is recognized as "GA".

Fig. 9. Centers of mass of the hand shape silhouettes and movement directions.

4. SIMULATION RESULTS AND DISCUSSION

The overall recognition performance for still images of hand pictures from 10 subjects is 88% correct on average. The movement tracking itself is 100% correct using the simple method of subtracting the starting and ending positions, although character boundary segmentation is performed manually rather than automatically. The overall recognition performance for voiced syllables, plosive syllables, palatalized syllables, and long sounding marks, including both shape and motion recognition, is 88%, 95%, 90%, and 95% correct, respectively.

Using the proposed method, many Japanese manual alphabet characters can be recognized. However, a higher recognition rate is needed for application in real situations. Some Japanese manual alphabet characters are very similar; for example, the silhouettes of "U" and "RA" are almost the same except for the index finger angle. Additional algorithms may need to be added to classify such similar shapes. The neural network model appeared to perform well on the current data set; nevertheless, trying other recognition algorithms or fine tuning the current recognition engine remains to be done.

The silhouettes of "I" and "CHI" are also similar. In an initial study using one digital camera, recognition of "I" and "CHI" was around 50%, so the proposed two web camera method was beneficial, as recognition improved drastically. However, from the viewpoint of real applications, two cameras are hard to install in consumer personal computers or smartphone-like devices, so we also want to try a one camera method with several enhancements.
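The motion step described in Sec. 3.5 can be sketched as follows: the center of mass is computed per frame, and the movement class is obtained by simply subtracting the starting and ending positions, as the paper states. The pixel tolerance `tol` and the function names are assumptions for illustration.

```python
import numpy as np

def center_of_mass(binary):
    """(row, col) center of mass of a binary silhouette,
    in whole-frame coordinates as in Sec. 3.5."""
    ys, xs = np.nonzero(binary)
    return ys.mean(), xs.mean()

def classify_movement(front_start, front_end, side_start, side_end, tol=10):
    """Classify the diacritic movement by subtracting the starting and
    ending centers of mass; `tol` (pixels) is an assumed threshold."""
    dy = front_end[0] - front_start[0]
    dx = front_end[1] - front_start[1]
    dz = side_end[1] - side_start[1]      # depth change seen in the side view
    if dx < -tol:
        return "dakuten"                  # center -> left: voiced syllable
    if dy > tol:
        return "choon"                    # up -> down: long sounding mark
    if dy < -tol:
        return "handakuten"               # down -> up: plosive syllable
    if abs(dz) > tol:
        return "yoon"                     # front -> back: palatalized syllable
    return None                           # no diacritic movement
```

Combining the label with the still-image result gives the final character, e.g. "KA" plus "dakuten" yields "GA".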
For motion detection, automatic tracking of objects and automatic segmentation of character boundaries, which are not included in this paper, should be considered in the near future. The current data were collected in a laboratory setting, with a single color background, stable lighting, stable camera placement, and clear production of the Japanese manual alphabet. Real applications require methods that are robust to real situations and run in real time, so serious system development with several improved algorithms and methods remains to be done.
5. CONCLUSION

An algorithm and implementation of Japanese manual alphabet recognition has been explained. To improve the recognition results, we are trying several improvements to the recognition algorithm. When reasonable recognition rates are obtained after these improvements, we plan to develop applications for communication support between hearing and hearing impaired people.

REFERENCES

[1] United Nations: Convention on the Rights of Persons with Disabilities.
[2] Ministry of Health, Labor and Welfare: Health, Labor and Welfare Whitepaper, Ministry of Health, Labor and Welfare. (In Japanese)
[3] Japanese Federation of Deaf: New Sign Language Handbook, Sanseido: Tokyo. (In Japanese)
[4] K. Watanabe, Y. Iwai, Y. Yagi, M. Yachida: Gesture recognition using color gloves, J. IEICE D-II, Vol. J80-D-II, No. 10. (In Japanese)
[5] A. Mitome, K. Ichige, R. Ishii: Hand shape analysis and recognition by masking and standardization of extracted hand images, J. IEICE D, Vol. J89-D, No. 6. (In Japanese)
[6] Y. Wang, S. Itai, S. Ono, S. Nakayama: Human recognition with ear image by principal component analysis, J. Japan Society of Information and Knowledge, 16(1). (In Japanese)
[7] K. Tabata, T. Kuroda: Improvement for finger alphabet recognition using distinctive features, Proc. of the 34th Japan Sign Language Society Convention, pp. 5-6. (In Japanese)
[8] M. Osato, M. Suzuki, A. Ito, S. Makino: Feature value combination for finger character recognition using a color glove, IEICE Technical Report, PRMU. (In Japanese)
[9] Y. Hamada, N. Shimada, Y. Shirai: Hand shape estimation using sequence of multiple viewpoint images based on transition network, J. IEICE D-II, Vol. J85-D-II, No. 8. (In Japanese)
[10] A. Imai, N. Shimada, Y. Shirai: 3-D hand posture recognition by learning contour variation, J. IEICE D-II, Vol. J88-D-II, No. 8. (In Japanese)
[11] T. Morozumi: Recognition of manual kana using characteristics in images, Takushoku University Technical Report, Vol. 9, No. 2. (In Japanese)
[12] S. Iwasaki, T. Asakura, K. Hirose: Recognition of finger spelling using neural network, Japan Society of Mechanical Engineers Hokuriku-Shinetsu Branch 34th Convention. (In Japanese)
[13] M. Osato, M. Suzuki, A. Ito, S. Makino: An interpolation method of the feature vector for finger character recognition, 2006 IEICE Convention, A-19-15, p. 333. (In Japanese)
[14] D. Hara, Y. Nagashima, A. Ichikawa, K. Kanda, M. Terauchi, K. Morimoto, Y. Shirai, Y. Horiuchi, K. Nakazono: The development of sindex V.3: Collaboration between engineering and linguistics in abstracting distinctive features of Japanese sign, Human Interface Symposium 2007. (In Japanese)
[15] D. Chai, K. N. Ngan: Locating facial region of a head-and-shoulders color image, Proc. 3rd Int. Conf. on Automatic Face and Gesture Recognition.
[16] National Center of Sign Language Education: New Sign Language Classroom Introductory, Japanese Federation of Deaf. (In Japanese)
[17] F. Kimoto, M. Funakawa, M. J. Hirayama: A study of recognition method and camera position for finger spelling using two silhouette images, 71st IPSJ Annual Conference, 4T-8. (In Japanese)
[18] M. J. Hirayama, M. Funakawa: "Finger alphabet recognition from still images using neural network," 72nd IPSJ Annual Conference, 1D-1, pp. (2) 13-14. (In Japanese)
Requirements for Maintaining Web Access for Hearing-Impaired Individuals Daniel M. Berry 2003 Daniel M. Berry WSE 2001 Access for HI Requirements for Maintaining Web Access for Hearing-Impaired Individuals
More informationImage processing applications are growing rapidly. Most
RESEARCH ARTICLE Kurdish Sign Language Recognition System Abdulla Dlshad, Fattah Alizadeh Department of Computer Science and Engineering, University of Kurdistan Hewler, Erbil, Kurdistan Region - F.R.
More informationJapanese sign-language recognition based on gesture primitives using acceleration sensors and datagloves
Japanese sign-language recognition based on gesture primitives using acceleration sensors and datagloves Hideyuki Sawada, Takuto Notsu and Shuji Hashimoto Department of Applied Physics, School of Science
More informationHand Gesture Recognition and Speech Conversion for Deaf and Dumb using Feature Extraction
Hand Gesture Recognition and Speech Conversion for Deaf and Dumb using Feature Extraction Aswathy M 1, Heera Narayanan 2, Surya Rajan 3, Uthara P M 4, Jeena Jacob 5 UG Students, Dept. of ECE, MBITS, Nellimattom,
More informationResearch Proposal on Emotion Recognition
Research Proposal on Emotion Recognition Colin Grubb June 3, 2012 Abstract In this paper I will introduce my thesis question: To what extent can emotion recognition be improved by combining audio and visual
More informationFigure 1: The relation between xyz and HSV. skin color in HSV color space from the extracted skin regions. At each frame, our system tracks the face,
Extraction of Hand Features for Recognition of Sign Language Words Nobuhiko Tanibata tanibata@cv.mech.eng.osaka-u.ac.jp Yoshiaki Shirai shirai@cv.mech.eng.osaka-u.ac.jp Nobutaka Shimada shimada@cv.mech.eng.osaka-u.ac.jp
More informationNoise-Robust Speech Recognition Technologies in Mobile Environments
Noise-Robust Speech Recognition echnologies in Mobile Environments Mobile environments are highly influenced by ambient noise, which may cause a significant deterioration of speech recognition performance.
More informationQuestion 1 Multiple Choice (8 marks)
Philadelphia University Student Name: Faculty of Engineering Student Number: Dept. of Computer Engineering First Exam, First Semester: 2015/2016 Course Title: Neural Networks and Fuzzy Logic Date: 19/11/2015
More informationDesigning Interactive Graphical Interfaces for Teaching Japanese Manual Alphabet
TCT Education of Disabilities, 2005 Vol. 4 (1) Designing Interactive Graphical Interfaces for Teaching Japanese Manual Alphabet Miki Namatame1*, Yasushi Harada2), Fusako Kusunoki2) and Takao Terano3) ^Department
More informationSmart Speaking Gloves for Speechless
Smart Speaking Gloves for Speechless Bachkar Y. R. 1, Gupta A.R. 2 & Pathan W.A. 3 1,2,3 ( E&TC Dept., SIER Nasik, SPP Univ. Pune, India) Abstract : In our day to day life, we observe that the communication
More informationRecognition of sign language gestures using neural networks
Recognition of sign language gestures using neural s Peter Vamplew Department of Computer Science, University of Tasmania GPO Box 252C, Hobart, Tasmania 7001, Australia vamplew@cs.utas.edu.au ABSTRACT
More informationINTELLIGENT LIP READING SYSTEM FOR HEARING AND VOCAL IMPAIRMENT
INTELLIGENT LIP READING SYSTEM FOR HEARING AND VOCAL IMPAIRMENT R.Nishitha 1, Dr K.Srinivasan 2, Dr V.Rukkumani 3 1 Student, 2 Professor and Head, 3 Associate Professor, Electronics and Instrumentation
More informationABSTRACT I. INTRODUCTION
2018 IJSRSET Volume 4 Issue 2 Print ISSN: 2395-1990 Online ISSN : 2394-4099 National Conference on Advanced Research Trends in Information and Computing Technologies (NCARTICT-2018), Department of IT,
More informationHand Gesture Recognition: Sign to Voice System (S2V)
Hand Gesture Recognition: Sign to Voice System (S2V) Oi Mean Foong, Tan Jung Low, and Satrio Wibowo Abstract Hand gesture is one of the typical methods used in sign language for non-verbal communication
More informationModeling the Use of Space for Pointing in American Sign Language Animation
Modeling the Use of Space for Pointing in American Sign Language Animation Jigar Gohel, Sedeeq Al-khazraji, Matt Huenerfauth Rochester Institute of Technology, Golisano College of Computing and Information
More informationComparison of Lip Image Feature Extraction Methods for Improvement of Isolated Word Recognition Rate
, pp.57-61 http://dx.doi.org/10.14257/astl.2015.107.14 Comparison of Lip Image Feature Extraction Methods for Improvement of Isolated Word Recognition Rate Yong-Ki Kim 1, Jong Gwan Lim 2, Mi-Hye Kim *3
More informationVoluntary Product Accessibility Template (VPAT)
Avaya Vantage TM Basic for Avaya Vantage TM Voluntary Product Accessibility Template (VPAT) Avaya Vantage TM Basic is a simple communications application for the Avaya Vantage TM device, offering basic
More informationAcoustic Sensing With Artificial Intelligence
Acoustic Sensing With Artificial Intelligence Bowon Lee Department of Electronic Engineering Inha University Incheon, South Korea bowon.lee@inha.ac.kr bowon.lee@ieee.org NVIDIA Deep Learning Day Seoul,
More informationDesign of Palm Acupuncture Points Indicator
Design of Palm Acupuncture Points Indicator Wen-Yuan Chen, Shih-Yen Huang and Jian-Shie Lin Abstract The acupuncture points are given acupuncture or acupressure so to stimulate the meridians on each corresponding
More informationDevelopment of Communication Support System using Lip Reading
ISCA Archive http://www.isca-speech.org/archive Auditory-Visual Speech Processing (AVSP) 211 Volterra, Italy September 1-2, 211 Development of Communication Support System using Lip Reading Takeshi Saitoh
More informationAnnotation and Retrieval System Using Confabulation Model for ImageCLEF2011 Photo Annotation
Annotation and Retrieval System Using Confabulation Model for ImageCLEF2011 Photo Annotation Ryo Izawa, Naoki Motohashi, and Tomohiro Takagi Department of Computer Science Meiji University 1-1-1 Higashimita,
More informationAccessible Computing Research for Users who are Deaf and Hard of Hearing (DHH)
Accessible Computing Research for Users who are Deaf and Hard of Hearing (DHH) Matt Huenerfauth Raja Kushalnagar Rochester Institute of Technology DHH Auditory Issues Links Accents/Intonation Listening
More informationGlove for Gesture Recognition using Flex Sensor
Glove for Gesture Recognition using Flex Sensor Mandar Tawde 1, Hariom Singh 2, Shoeb Shaikh 3 1,2,3 Computer Engineering, Universal College of Engineering, Kaman Survey Number 146, Chinchoti Anjur Phata
More informationSupport. Equip. Empower. Assessing and Accommodating Vision and Hearing Injuries
Assessing and Accommodating Vision and Hearing Injuries Webinar Series Overview Session 1: CAP and the DoDI 6025.22 Mandates of DoDI Overview of CAP services Session 2: Assistive Technologies and Needs
More informationDirector of Testing and Disability Services Phone: (706) Fax: (706) E Mail:
Angie S. Baker Testing and Disability Services Director of Testing and Disability Services Phone: (706)737 1469 Fax: (706)729 2298 E Mail: tds@gru.edu Deafness is an invisible disability. It is easy for
More informationSpeaking System For Mute
IOSR Journal of Engineering (IOSRJEN) ISSN (e): 2250-3021, ISSN (p): 2278-8719 Volume 5, PP 31-36 www.iosrjen.org Speaking System For Mute Padmakshi Bhat 1, Aamir Ansari 2, Sweden D silva 3, Abhilasha
More informationHow Hearing Impaired People View Closed Captions of TV Commercials Measured By Eye-Tracking Device
How Hearing Impaired People View Closed Captions of TV Commercials Measured By Eye-Tracking Device Takahiro Fukushima, Otemon Gakuin University, Japan Takashi Yasuda, Dai Nippon Printing Co., Ltd., Japan
More informationRecognition of Tamil Sign Language Alphabet using Image Processing to aid Deaf-Dumb People
Available online at www.sciencedirect.com Procedia Engineering 30 (2012) 861 868 International Conference on Communication Technology and System Design 2011 Recognition of Tamil Sign Language Alphabet
More informationExtraction of Blood Vessels and Recognition of Bifurcation Points in Retinal Fundus Image
International Journal of Research Studies in Science, Engineering and Technology Volume 1, Issue 5, August 2014, PP 1-7 ISSN 2349-4751 (Print) & ISSN 2349-476X (Online) Extraction of Blood Vessels and
More informationInternational Journal of Advance Engineering and Research Development. Gesture Glove for American Sign Language Representation
Scientific Journal of Impact Factor (SJIF): 4.14 International Journal of Advance Engineering and Research Development Volume 3, Issue 3, March -2016 Gesture Glove for American Sign Language Representation
More informationQuality Assessment of Human Hand Posture Recognition System Er. ManjinderKaur M.Tech Scholar GIMET Amritsar, Department of CSE
Quality Assessment of Human Hand Posture Recognition System Er. ManjinderKaur M.Tech Scholar GIMET Amritsar, Department of CSE mkwahla@gmail.com Astt. Prof. Prabhjit Singh Assistant Professor, Department
More informationCancer Cells Detection using OTSU Threshold Algorithm
Cancer Cells Detection using OTSU Threshold Algorithm Nalluri Sunny 1 Velagapudi Ramakrishna Siddhartha Engineering College Mithinti Srikanth 2 Velagapudi Ramakrishna Siddhartha Engineering College Kodali
More informationCommunication. Jess Walsh
Communication Jess Walsh Introduction. Douglas Bank is a home for young adults with severe learning disabilities. Good communication is important for the service users because it s easy to understand the
More informationMicrophone Input LED Display T-shirt
Microphone Input LED Display T-shirt Team 50 John Ryan Hamilton and Anthony Dust ECE 445 Project Proposal Spring 2017 TA: Yuchen He 1 Introduction 1.2 Objective According to the World Health Organization,
More informationCharacterization of 3D Gestural Data on Sign Language by Extraction of Joint Kinematics
Human Journals Research Article October 2017 Vol.:7, Issue:4 All rights are reserved by Newman Lau Characterization of 3D Gestural Data on Sign Language by Extraction of Joint Kinematics Keywords: hand
More informationVideo-Based Finger Spelling Recognition for Ethiopian Sign Language Using Center of Mass and Finite State of Automata
Contemporary Issues Summit, Harvard, Boston, USA, March 2017, Vol. 13, No. 1 ISSN: 2330-1236 Video-Based Finger Spelling Recognition for Ethiopian Sign Language Using Center of Mass and Finite State of
More informationEmbedded Based Hand Talk Assisting System for Dumb Peoples on Android Platform
Embedded Based Hand Talk Assisting System for Dumb Peoples on Android Platform R. Balakrishnan 1, Santosh BK 2, Rahul H 2, Shivkumar 2, Sunil Anthony 2 Assistant Professor, Department of Electronics and
More informationLearning Objectives. AT Goals. Assistive Technology for Sensory Impairments. Review Course for Assistive Technology Practitioners & Suppliers
Assistive Technology for Sensory Impairments Review Course for Assistive Technology Practitioners & Suppliers Learning Objectives Define the purpose of AT for persons who have sensory impairment Identify
More informationBefore the Department of Transportation, Office of the Secretary Washington, D.C
Before the Department of Transportation, Office of the Secretary Washington, D.C. 20554 ) In the Matter of ) Accommodations for Individuals Who Are ) OST Docket No. 2006-23999 Deaf, Hard of Hearing, or
More informationNote: This document describes normal operational functionality. It does not include maintenance and troubleshooting procedures.
Date: 26 June 2017 Voluntary Accessibility Template (VPAT) This Voluntary Product Accessibility Template (VPAT) describes accessibility of Polycom s CX5100 Unified Conference Station against the criteria
More informationMobile Speech-to-Text Captioning Services: An Accommodation in STEM Laboratory Courses
Mobile Speech-to-Text Captioning Services: An Accommodation in STEM Laboratory Courses Michael Stinson, Pamela Francis, and, Lisa Elliot National Technical Institute for the Deaf Paper presented at the
More information1 Pattern Recognition 2 1
1 Pattern Recognition 2 1 3 Perceptrons by M.L. Minsky and S.A. Papert (1969) Books: 4 Pattern Recognition, fourth Edition (Hardcover) by Sergios Theodoridis, Konstantinos Koutroumbas Publisher: Academic
More informationBrain Tumor Detection using Watershed Algorithm
Brain Tumor Detection using Watershed Algorithm Dawood Dilber 1, Jasleen 2 P.G. Student, Department of Electronics and Communication Engineering, Amity University, Noida, U.P, India 1 P.G. Student, Department
More informationTips When Meeting A Person Who Has A Disability
Tips When Meeting A Person Who Has A Disability Many people find meeting someone with a disability to be an awkward experience because they are afraid they will say or do the wrong thing; perhaps you are
More informationRecognition of Hand Gestures by ASL
Recognition of Hand Gestures by ASL A. A. Bamanikar Madhuri P. Borawake Swati Bhadkumbhe Abstract - Hand Gesture Recognition System project will design and build a man-machine interface using a video camera
More information