A Portable Tool for Deaf and Hearing Impaired People


R. A. D. K. Rupasinghe, D. C. R. Ailapperuma, P.M.B.N.E. De Silva, A. K. G. Siriwardana and B.H. Sudantha
Faculty of Information Technology, University of Moratuwa, Katubedda, Sri Lanka

Abstract - A portable system for deaf and hearing impaired people has been developed based on the IEEE 1451 sensor standards and other related technologies to reduce their communication difficulties with hearing society. The system comprises two main modules. The first performs gesture recognition, capturing Sinhala sign language symbols from a deaf or hearing impaired person through a specially developed smart glove embedded with a 3-D accelerometer, a digital gyroscope and flex sensors, complying with the IEEE smart sensor standards. The smart glove captures both finger movements and hand movements. An Arduino Mega controller board, containing the Atmel ATmega2560 AVR microcontroller, was used to implement the Smart Transducer Interface Module (STIM). The captured signals are analyzed and the related text and voice are generated. The second module processes human voice in real time using the Sphinx-4 framework, generates the corresponding text and displays Sinhala sign language animations on an external display using an animated avatar. The animations are generated using Blender and the JMonkey game engine.

Index Terms - deaf and hearing impaired, IEEE 1451, Sinhala sign language, smart data glove, 3-D accelerometer, digital gyroscope, flex sensors, embedded systems, STIM, smart sensors, Blender, JMonkey

I. INTRODUCTION

All people deserve equal treatment, and it is their right to receive equal status without any discrimination [1]. Like everyone else, deaf and hearing impaired persons need to communicate with other people, and they have the same right to be treated equally [2, 3]. Many people today use current technologies to make their lives easier.
Disabled people are no exception to this reality. Over 5% of the world's population, nearly 360 million people, have some kind of hearing loss [4]. Many of them live in middle- and low-income countries, where their special needs and requirements are least understood [4]. They need today's technology to help them cope with daily life. Deaf or hearing impaired people often find it hard to communicate with hearing people, and they are not aware of sounds from their surroundings, especially when travelling. Providing a portable device that makes them aware of their surroundings and helps them communicate with society is the main goal the proposed system aims to achieve. The proposed system has two main subsystems: a gesture (sign language) recognition system and a voice processing system. The gesture recognition system enables a deaf person to talk by processing his or her hand gestures (sign language) and speaking out the content. The voice processing system enables a hearing impaired person to see the voices around him by converting voice to Sinhala sign language.

II. PROBLEM IN BRIEF

Hearing impairment, hearing loss, or deafness is a partial or total inability to hear. It is one of the most common medical conditions presented to physicians. Disabling hearing loss is defined as hearing loss greater than 40 dB in adults and greater than 30 dB in children [4]. Adult-onset hearing loss ranks 15th among the leading causes of the Global Burden of Disease (GBD) [5]. It is caused by many different factors, including, but not limited to, aging, exposure to noise, illness, chemicals and physical trauma, or any combination of these. Communication is a basic need for everyone, but deaf and hearing impaired people are not able to communicate well, and this is the main problem they face.
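The WHO criterion quoted above can be expressed as a simple check. This is an illustrative sketch only; the function name and structure are ours, not part of the paper:

```python
def has_disabling_hearing_loss(loss_db: float, is_child: bool) -> bool:
    """WHO criterion cited above: hearing loss greater than 40 dB
    for adults and greater than 30 dB for children."""
    threshold = 30.0 if is_child else 40.0
    return loss_db > threshold

print(has_disabling_hearing_loss(35, is_child=True))   # child at 35 dB -> True
print(has_disabling_hearing_loss(35, is_child=False))  # adult at 35 dB -> False
```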
Deaf people communicate visually and physically rather than audibly, which complicates their relationships with hearing people. Sign languages are the languages used by communities of deaf people all over the world. They are not derived from spoken languages; they have their own independent vocabularies and their own grammatical structures, and they exhibit the full range of expression that spoken languages afford their users [6]. Usually, deaf people use sign language to communicate with each other, but most hearing people never learn it. For this reason, many people feel awkward or become frustrated trying to communicate with deaf people, especially when no interpreter is available.

Many services running on voice platforms cannot be accessed by deaf and hearing impaired people. Because of this, there are significant communication barriers between a deaf person and society. The system we propose addresses this communication problem and is also an innovative human-machine interaction solution for deaf and hearing impaired people.

III. RELATED WORK

Many similar approaches have been taken to address the above-mentioned issues, using various technologies and methodologies. In the recent past there has been much research work on hand sign recognition. Gesture recognition technology can be divided into two categories:

Glove Based Techniques
Vision Based Techniques

The Sound of Signing, Magic Gloves and Hand Sign Interpreter [7, 8, 9] are examples of glove based techniques. The Sound of Signing is a sign language translator that uses a glove fitted with sensors to interpret the 26 English letters of American Sign Language (ASL). The glove combines flex sensors, contact sensors and a three-axis accelerometer to gather data on each finger's position and the hand's motion in order to differentiate the letters. The data is transferred to a base station, which displays and speaks the letter and also interfaces with a computer. In Magic Gloves, wireless data gloves with flex sensors are used to identify the gestures. The resulting digital signals from the flex sensors are encoded and transmitted over a radio frequency link. Radio frequency receivers receive the signal and feed it, through a decoder, to the gesture recognition section. The gesture is then recognized and the corresponding text is identified. Text-to-speech conversion takes place in the voice section, and the result is played out through the speaker.
Hand Sign Interpreter is a prototype which can store 10 different signals according to the user's hand signs. Three of them are based purely on the output of the accelerometer and seven on the output of the flex sensors. The processor analyses the signals from the sensor glove and fetches the corresponding audio signal from a memory IC, which is fed to the amplifier; the speaker then generates the relevant sound. In vision based methods, a camera is the input device for observing the hands or fingers. iCommunicator, the Microsoft Kinect Sign Language Translator and the ASL Fingerspelling Translator Glove [10, 11, 12] are examples of related systems. The Microsoft Kinect Sign Language Translator aims at fast and accurate 3D sign language recognition based on the depth and color images captured by Kinect. It also provides two-way communication between a deaf person and a hearing person, but it was developed only for Chinese sign language and it is not portable. iCommunicator is commercial software developed for the Windows platform. It translates speech/text to video sign language and text to voice in real time. iCommunicator presents American Sign Language (ASL) signs in English word order (subject+verb+object) to improve the relationship between spoken, written and signed words, promoting better literacy levels than traditional ASL.

TABLE I. COMPARISON BETWEEN SIMILAR APPROACHES

    Feature                       A    B    C    D    E
    1. Sign-to-voice
    2. Voice-to-text
    3. Text-to-sign
    4. Voice-to-sign
    5. Portable
    6. Recognize words in sign

    A. Microsoft Kinect SL Translator
    B. iCommunicator
    C. Magic Gloves
    D. Hand Sign Interpreter
    E. The Sound of Signing

IV. OUR APPROACH

To solve the above-discussed problem we came up with a portable tool that integrates many features to help deaf and hearing impaired people communicate with the hearing society. Our system consists of two applications:

1. Gesture Recognition System
2. Voice Processing System

The gesture recognition system converts Sinhala sign language symbols to voice. Sign language signals are captured using a sensor glove. The glove comprises a complex embedded system that captures the flexion of the hand and fingers as well as the movements of the hand. This allows most sign language gestures to be captured without using cameras for detection, making it one of the most suitable approaches for a portable system. The voice processing system inputs human voice through a microphone array. After voice recognition, the system outputs an animated avatar which is shown on an external display. Figure 1 illustrates the top level architecture of the proposed solution.

Figure 1. Top Level Architecture of the System
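The two-subsystem architecture above can be sketched as a pair of functions, one per direction of communication. All names and the toy lookup table are illustrative assumptions, not the paper's code:

```python
def gesture_module(sensor_frame: dict) -> str:
    """Map one captured glove frame to a recognized sign.
    In the real system a trained classifier does this; here a toy lookup."""
    known = {(1, 0, 0, 0, 0): "A"}  # flex pattern -> letter (illustrative)
    return known.get(tuple(sensor_frame["flex"]), "?")

def voice_module(utterance: str, sign_vocabulary: set) -> list:
    """Keep only recognized words that have a sign-language animation,
    mirroring the filtering step described for the voice subsystem."""
    return [w for w in utterance.lower().split() if w in sign_vocabulary]

print(gesture_module({"flex": [1, 0, 0, 0, 0]}))        # -> A
print(voice_module("hello from the road", {"hello", "road"}))
```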

V. DESIGN OF THE SYSTEM

The proposed system mainly consists of the following components:
A. Gesture Recognition System
B. Voice Processing System

A. Gesture Recognition System

The design of the gesture recognition system is centred on the main hardware module, the smart data glove. It carries a 3-D accelerometer, a digital gyroscope, ten flex sensors and a few limit switches. All the sensors are connected to the system in compliance with the IEEE 1451 smart sensor standards. IEEE 1451, the Standards for Smart Transducer Interface for Sensors and Actuators, defines a set of common communication interfaces for connecting transducers to embedded systems and field networks in a network-independent environment. The standard makes it easier for system designers and transducer manufacturers to develop various smart devices and to interface them with different networks, systems and instruments. It consists of seven parts, each covering a different aspect of the interface standard [13, 14, 15, 16, 17, 18]. In this project the standard was used to implement the STIM for the glove's sensors. The glove is designed to recognize the movement of the fingers and the palm, as well as the motion of the hand over a certain period of time. The data received from the glove is transmitted to a microcontroller, which processes the inputs and sends them to the main processing unit. The main processing unit identifies the received data and processes it so that it outputs the voice signal corresponding to the hand gesture. Figure 2 shows the signal flow of the gesture recognition system: finger movement and hand movement data pass from the sensor array through the microcontroller, which processes the data and creates output packets, to the translator and mapping system, which processes the data packets and translates the gestures into voice.

Figure 2. Process in the Gesture Recognition System

B. Voice Processing System

The voice processing system is the second main component of this product.
It converts a human voice signal into an animated avatar which performs the corresponding hand gestures on the external display device. The human voice is captured using a microphone and fed to the Sphinx framework for processing. Sphinx recognizes the utterance and returns the corresponding word or phrase. The recognized words are filtered against the available sign language representations and manipulated into a format compatible with the sign language. The filtered output is then fed to the JMonkey game engine for avatar animation; the animation is generated dynamically in the JMonkey engine and shown on the external display. The Sphinx framework provides a rich API for developing custom voice recognition applications and is well supported, with a wide range of Sphinx-based implementations in use in live environments. JMonkey Engine is a high performance, scene-graph based graphics API, built to fill the lack of full featured graphics engines written in Java [19].

VI. IMPLEMENTATION

The implementation of the proposed solution varied from module to module according to the technologies used.

A. Gesture Recognition System

The glove is implemented using flex sensors which respond to the bending of the fingers [20, 21]. These flex sensors change their resistance according to the bend angle of the finger, and the corresponding analog output voltages are fed to the microcontroller-based system. Using the built-in ADC unit, 10-bit digital data is produced, giving high accuracy in capturing finger movements [22]. To capture the motion of the hand, a Micro-Electro-Mechanical Systems (MEMS) sensor, the MPU-6050, is used to sense acceleration in the X, Y and Z directions. The same sensor contains a tri-axis gyroscope which provides gyroscopic data [23, 24, 25].
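The flex-sensor reading chain above (resistance change, voltage divider, 10-bit ADC) can be sketched as a linear mapping from ADC counts to an approximate bend angle. The calibration counts below are hypothetical; a real glove would calibrate each sensor per finger:

```python
def adc_to_angle(adc_value: int, flat_count: int = 300, bent_count: int = 700,
                 max_angle: float = 90.0) -> float:
    """Map a 10-bit ADC reading from a flex-sensor voltage divider to an
    approximate bend angle in degrees, by linear interpolation between
    calibrated flat and fully-bent readings (illustrative values)."""
    adc_value = max(0, min(1023, adc_value))   # clamp to the 10-bit range
    span = bent_count - flat_count
    angle = (adc_value - flat_count) / span * max_angle
    return max(0.0, min(max_angle, angle))     # clamp to the physical range

print(adc_to_angle(500))  # halfway between the calibration points -> 45.0
```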
Finger positions and hand movements are received by the microcontroller and then sent as data packets to the main computer program, which classifies the data and identifies the patterns in order to predict the meaning of the hand gestures. The data received from the microcontroller is first processed as time-series data. Each data set sent by the microcontroller can be treated as a feature vector, as shown in Figure 3: the first five readings are for the five fingers (pinky, ring, middle, index, thumb), the next two are the readings from the sensors between those fingers (thumb-index and index-middle), and the remaining readings are the acceleration data (ax, ay, az) and the gyroscopic data (gx, gy, gz).

Figure 3. Feature vector of the data set: pinky, ring, middle, index, thumb, thumb-index, index-middle, ax, ay, az, gx, gy, gz

As the readings from the flex sensors are stable, they can be used directly in the feature vector, but the accelerometer and gyroscope data must be filtered before pattern recognition. Kalman and complementary filters are used to reduce the noise in the acceleration and gyroscopic data and obtain a smooth curve for the required gesture [26, 27, 28]. The Kalman filter is widely used in signal processing for noise reduction; since accelerometers are highly susceptible to noise, a lot of jitter can occur during motion, producing unnecessary data for processing.

For recognizing the alphabet using only the finger movements, a classifier is used. A K-Nearest Neighbor (KNN) classifier can easily be implemented for this purpose. Once the system is trained with the training data, the KNN classifier can be used to identify the gestures. An issue remains with characters like J, Z, P and Q, as they require motion and orientation of the hand; to capture this, the motion sensor readings must be used. The motion sensor readings and the finger position readings together can be fed to the KNN classifier as a time series to identify the characters signed by the hand. Once the characters are identified, the words must be identified: recognized words are manipulated into a format suitable for the sign language and mapped to the relevant hand gestures using the avatar. The avatar is created and animated using Blender. The animation parameters and the modeled character are exported in XML format, which can be used for dynamic animation on the Java platform. On the Java application side we used the JMonkey game engine for dynamic animation: it maps the corresponding animation to each word recognized by the trained system, and the animations are displayed on the external display.

Figure 5. Animated avatar used to display sign language

Once the words and characters are identified, they are processed in order to generate sentences out of those words.
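A single update step of the complementary filter mentioned above can be sketched as follows. This is an illustrative implementation, not the authors' code; the blending coefficient of 0.98 and the toy noise trace are our assumptions:

```python
import math

def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """One complementary-filter step: trust the integrated gyroscope rate
    short-term and the accelerometer-derived angle long-term."""
    return alpha * (angle_prev + gyro_rate * dt) + (1 - alpha) * accel_angle

def accel_tilt(ax, ay, az):
    """Tilt (roll) angle implied by the gravity vector, in degrees."""
    return math.degrees(math.atan2(ay, az))

# Toy trace: hand held still at 0 degrees with noisy accelerometer readings.
angle = 0.0
for ay in (0.02, -0.01, 0.03, -0.02):
    angle = complementary_filter(angle, gyro_rate=0.0,
                                 accel_angle=accel_tilt(0.0, ay, 1.0), dt=0.01)
print(round(angle, 2))  # stays close to 0 despite the jitter
```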
For that, we feed the data into a natural language processing tool and receive the output as a set of words. As the natural language processing tool we used a free text-to-speech Java framework. Figure 4 shows an image of the developed smart glove.

Figure 4. Sensor glove

B. Voice Processing System

This component mainly consists of voice-to-text conversion and text-to-sign-language mapping. Voice to text is based on the Sphinx framework, which receives the voice signals captured by a microphone. As Sphinx is highly customizable, it can be trained for a custom language with a different accent. We collected a dataset consisting of more than one hour of recordings. Because we focus on a particular environment, in which the user is out on the road, we restricted the vocabulary to a fixed set of words; the dataset contains the selected words spoken by a few people. Sphinx has its own configuration for training the system to recognize a custom language, and we also developed a custom language model suited to the vocabulary we trained on. With these requirements in place, we trained the system several times until we obtained accurate results on a testing dataset. The recognized words or sentences are then mapped to the corresponding sign language animations.

VII. EVALUATION

We tested our system in a lab environment. The voice-to-text subsystem was trained using the Sphinx framework. The training dataset consisted of one and a half hours of recorded voice (16 kHz, 16-bit mono audio) and the testing dataset of 15 minutes with the same specifications. The whole system was trained on a Linux operating system. The trained system, tested with the 15-minute dataset, achieved a word recognition error rate of 12%. The data glove was tested with the help of 10 users and the following results were achieved. The KNN classifier is used as the main classification method and was trained for all 26 characters with 200 training samples per character.
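The KNN classification used for the glove can be sketched as a majority vote among the k nearest training vectors, with k = 5 as in the configuration described here. The two-element toy vectors below stand in for the glove's 13-element feature vectors (5 flex, 2 contact, 3 acceleration, 3 gyroscope values); everything else is an illustrative assumption:

```python
from collections import Counter

def knn_predict(train, query, k=5):
    """Classify a glove feature vector by majority vote among its k
    nearest training vectors (Euclidean distance).
    `train` is a list of (feature_vector, label) pairs."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda sample: dist(sample[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy training set: two exaggerated flex patterns standing in for letters.
train = [([0.9, 0.1], "A")] * 5 + [([0.1, 0.9], "B")] * 5
print(knn_predict(train, [0.8, 0.2]))  # closest to the "A" cluster -> A
```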
The classifier configuration considers 5 neighbors when making a decision, and cross-validation is used to improve reliability. Each user signed each character 20 times during training. Among the characters, J, Z, P and Q required motion and orientation of the hand, and the pairs P and K, H and U, and Q and G produced some ambiguous outputs.

TABLE III. ERROR RATE FOR EACH SYMBOL

    Symbol                                                Error Rate
    A, B, D, E, F, G, H, I, L, R, S, T, V, W, X, Y, Z        2%
    C                                                        3%
    J                                                       15%
    K                                                        5%
    M                                                        4%
    N                                                        4%
    O                                                        2%
    P                                                       10%
    Q                                                       10%
    U                                                        5%

VIII. CONCLUSION

Many communication barriers exist between deaf people and the rest of society. We therefore developed a portable tool that integrates many features to help deaf and hearing impaired people communicate with the hearing society. The proposed system has two main subsystems: a gesture (sign language) recognition system and a voice processing system. The gesture recognition system enables a deaf person to talk by processing his or her hand gestures (Sinhala sign language) with a sensor glove and speaking out the content. The voice processing system enables a hearing impaired person to see the voices around him by converting voice to Sinhala sign language displayed by an animated avatar. Sign language is captured using the specially designed sensor glove, and the recognized hand signals are converted into voice in real time. The sensor glove is a complex embedded system with flex sensors, an accelerometer and a gyroscope. Captured voice signals are converted into sign language, and the relevant sign language animations are displayed in real time on an external display using the animated avatar; real-time animation generation is done using Blender and the JMonkey game engine. To improve the accuracy of the voice processing system, a larger and more relevant dataset is needed and the configuration parameters have to be optimized accordingly. Table II compares our approach with the other approaches, considering the similar and distinct features of our proposed solution.

TABLE II. COMPARISON BETWEEN OUR APPROACH AND OTHER APPROACHES

    Feature                             Other approaches   Our approach
    Portable                            Yes/No             Yes
    Recognize words in sign             Yes/No             Yes
    Support for Sinhala Sign Language   No                 Yes
    Sign-to-voice                       Yes/No             Yes
    Voice-to-text                       Yes/No             Yes
    Text-to-sign                        Yes/No             Yes
    Voice-to-sign                       Yes/No             Yes

IX. FUTURE WORK

To make the system more valuable and user friendly, we will add a feature that notifies the user of the direction and type of surrounding sounds. We also plan to test the system with deaf and hearing impaired students at the Rathmalana Deaf School in Sri Lanka.
REFERENCES

[1] Human Rights Act, Fact Sheet No. 2 (Rev. 1), The International Bill of Human Rights, United Nations High Commission for Human Rights, Palais Wilson 52, CH-1201, Geneva, Switzerland.
[2] World Federation of the Deaf, Human Rights for Deaf People, World Federation of the Deaf (WFD), Light House, Likantie 4, Helsinki, Finland.
[3] Organizational Publication, The Human Rights Act, Action on Hearing Loss, 19-23 Featherstone Street, London.
[4] WHO, Deafness and Hearing Loss, World Health Organization, Avenue Appia 20, Geneva 27, Switzerland.
[5] WHO, Facts about Deafness, World Health Organization, Avenue Appia 20, Geneva 27, Switzerland.
[6] Wendy Sandler and Diane Lillo-Martin, "Natural Sign Languages," in Handbook of Linguistics, M. Aronoff and J. Rees-Miller (eds.), 2001.
[7] Ranjay Krishna, Seonwoo Lee, Si Ping Wang and Jonathan Lang, "Sign Language Translator - The Sound of Signing."
[8] Shoaib Ahmed, "Magic Gloves - Hand Gesture Recognition and Voice Conversion System for Differently Abled Dumb People," presented at Tech Expo - The Global Summit, London, United Kingdom.
[9] Ajinkya Raut, Vineeta Singh, Vikrant Rajput and Ruchika Mahale, "Hand Sign Interpreter," The International Journal of Engineering and Science (IJES), Vol. 1.
[10] iCommunicator Information Sheet.
[11] Xilin Chen, Hanjing Li, Tim Pan, Stewart Tansley and Ming Zhou, "Kinect Sign Language Translator Expands Communication Possibilities," Microsoft Research.
[12] Jamal Haydar, Bayan Dalal, Shahed Hussainy, Lina El Khansa and Walid Fahs, "ASL Fingerspelling Translator Glove," IJCSI International Journal of Computer Science Issues, Vol. 9, Issue 6, No. 1, Nov.
[13] IEEE, IEEE Standard for a Smart Transducer Interface for Sensors and Actuators - Transducer to Microprocessor Communication Protocols and Transducer Electronic Data Sheet (TEDS) Formats, The Institute of Electrical and Electronics Engineers, Inc., New York, USA.
[14] Kularatna N., Sensors, Modern Component Families and Circuit Block Design, Butterworth-Heinemann, Woburn, USA, 2000.
[15] B.H. Sudantha and N. Kularathna, "An Environmental Air Pollution Monitoring System Based on the IEEE 1451 Standard for Low Cost Requirements," IEEE Sensors Journal, Vol. 8, No. 4, April.
[16] Richard L. Fischer and Jeff Burch, The PICmicro MCU as an IEEE 1451.2 Compatible Smart Transducer Interface Module (STIM), Application Note AN214, Microchip Technology Inc., USA.
[17] Kang B. Lee and Richard D. Schneeman, "A Standardized Approach for Transducer Interfacing: Implementing IEEE-P1451 Smart Transducer Interface Draft Standards," Proceedings of Sensors Expo, Philadelphia, USA, October 1996.
[18] Analog Devices, The ADuC812 MicroConverter as an IEEE Compatible Smart Transducer Interface, MicroConverter Technical Note uc003, Analog Devices, Inc., Norwood, USA.

[19] Deepak Modi, Ajay Jaiswal, Mayank Gupta, Mayank Mandloi, Mohit Mehta and Dhairya Mukheja, "3D Game Engines as a New Reality," International Journal of Research Studies in Science, Engineering and Technology (IJRSSET), Vol. 1, Issue 4, July 2014.
[20] Spectra Symbol, "Flex Sensor" 4.5" datasheet.
[21] Giovanni Saggio, "Electrical Resistance Profiling of Bend Sensors Adopted to Measure Spatial Arrangement of the Human Body," Dept. of Electronic Engineering, University of Tor Vergata, Rome, Italy.
[22] Atmel Corporation, "Datasheet - Atmel ATmega640/V-1280/V-1281/V-2560/V-2561/V," Atmel Corporation, 1600 Technology Drive, San Jose, California 95110, USA.
[23] InvenSense Inc., "MPU-6000 and MPU-6050 Product Specification," MPU-6000/MPU-6050 datasheet, May.
[24] Sam Naghshineh, Golafsoun Ameri, Mazdak Zereshki, S. Krishnan and M. Abdoli-Eramaki, "Human Motion Capture Using Tri-Axial Accelerometers."
[25] Shay Reuveny and Maayan Zadik, "3D Motion Tracking with Gyroscope and Accelerometer," Final Project.
[26] Piyush Kumar, Jyoti Verma and Shitala Prasad, "Hand Data Glove: A Wearable Real-Time Device for Human-Computer Interaction," International Journal of Advanced Science and Technology, Vol. 43, June.
[27] Greg Welch and Gary Bishop, "An Introduction to the Kalman Filter," TR, Department of Computer Science, University of North Carolina, Chapel Hill, NC, July.
[28] Paul C. Glasser, "An Introduction to the Use of Complementary Filters for Fusion of Sensor Data," Research Paper.


More information

A Wearable Hand Gloves Gesture Detection based on Flex Sensors for disabled People

A Wearable Hand Gloves Gesture Detection based on Flex Sensors for disabled People A Wearable Hand Gloves Gesture Detection based on Flex Sensors for disabled People Kunal Purohit 1, Prof. Kailash Patidar 2, Mr. Rishi Singh Kushwah 3 1 M.Tech Scholar, 2 Head, Computer Science & Engineering,

More information

ISSN: [Jain * et al., 7(4): April, 2018] Impact Factor: 5.164

ISSN: [Jain * et al., 7(4): April, 2018] Impact Factor: 5.164 IJESRT INTERNATIONAL JOURNAL OF ENGINEERING SCIENCES & RESEARCH TECHNOLOGY IMAGE PROCESSING BASED SPEAKING SYSTEM FOR MUTE PEOPLE USING HAND GESTURES Abhishek Jain *1, Lakshita Jain 2, Ishaan Sharma 3

More information

Interact-AS. Use handwriting, typing and/or speech input. The most recently spoken phrase is shown in the top box

Interact-AS. Use handwriting, typing and/or speech input. The most recently spoken phrase is shown in the top box Interact-AS One of the Many Communications Products from Auditory Sciences Use handwriting, typing and/or speech input The most recently spoken phrase is shown in the top box Use the Control Box to Turn

More information

Analysis of Recognition System of Japanese Sign Language using 3D Image Sensor

Analysis of Recognition System of Japanese Sign Language using 3D Image Sensor Analysis of Recognition System of Japanese Sign Language using 3D Image Sensor Yanhua Sun *, Noriaki Kuwahara**, Kazunari Morimoto *** * oo_alison@hotmail.com ** noriaki.kuwahara@gmail.com ***morix119@gmail.com

More information

International Journal of Advances in Engineering & Technology, Sept., IJAET ISSN: Tambaram, Chennai

International Journal of Advances in Engineering & Technology, Sept., IJAET ISSN: Tambaram, Chennai CHALLENGER S MEDIA M.Nalini 1, G.Jayasudha 2 and Nandhini.J.Rao 3 1 Assistant Professor, Department of Electronics and Instrumentation, Sri Sairam Engineering College, West Tambaram, Chennai-600044. 2

More information

Accessible Computing Research for Users who are Deaf and Hard of Hearing (DHH)

Accessible Computing Research for Users who are Deaf and Hard of Hearing (DHH) Accessible Computing Research for Users who are Deaf and Hard of Hearing (DHH) Matt Huenerfauth Raja Kushalnagar Rochester Institute of Technology DHH Auditory Issues Links Accents/Intonation Listening

More information

[Ashwini*, 5(4): April, 2016] ISSN: (I2OR), Publication Impact Factor: 3.785

[Ashwini*, 5(4): April, 2016] ISSN: (I2OR), Publication Impact Factor: 3.785 IJESRT INTERNATIONAL JOURNAL OF ENGINEERING SCIENCES & RESEARCH TECHNOLOGY GESTURE VOCALIZER FOR DEAF AND DUMB Kshirasagar Snehal P.*, Shaikh Mohammad Hussain, Malge Swati S., Gholap Shraddha S., Mr. Swapnil

More information

Recognition of sign language gestures using neural networks

Recognition of sign language gestures using neural networks Recognition of sign language gestures using neural s Peter Vamplew Department of Computer Science, University of Tasmania GPO Box 252C, Hobart, Tasmania 7001, Australia vamplew@cs.utas.edu.au ABSTRACT

More information

Detection of Finger Motion using Flex Sensor for Assisting Speech Impaired

Detection of Finger Motion using Flex Sensor for Assisting Speech Impaired Detection of Finger Motion using Flex Sensor for Assisting Speech Impaired Heena Joshi 1, Shweta Bhati 2, Komal Sharma 3, Vandana Matai 4 Assistant Professor, Department of Electronics and Communication

More information

Member 1 Member 2 Member 3 Member 4 Full Name Krithee Sirisith Pichai Sodsai Thanasunn

Member 1 Member 2 Member 3 Member 4 Full Name Krithee Sirisith Pichai Sodsai Thanasunn Microsoft Imagine Cup 2010 Thailand Software Design Round 1 Project Proposal Template PROJECT PROPOSAL DUE: 31 Jan 2010 To Submit to proposal: Register at www.imaginecup.com; select to compete in Software

More information

COMPUTER PLAY IN EDUCATIONAL THERAPY FOR CHILDREN WITH STUTTERING PROBLEM: HARDWARE SETUP AND INTERVENTION

COMPUTER PLAY IN EDUCATIONAL THERAPY FOR CHILDREN WITH STUTTERING PROBLEM: HARDWARE SETUP AND INTERVENTION 034 - Proceeding of the Global Summit on Education (GSE2013) COMPUTER PLAY IN EDUCATIONAL THERAPY FOR CHILDREN WITH STUTTERING PROBLEM: HARDWARE SETUP AND INTERVENTION ABSTRACT Nur Azah Hamzaid, Ammar

More information

Recognition of Voice and Text Using Hand Gesture

Recognition of Voice and Text Using Hand Gesture Recognition of Voice and Text Using Hand Gesture Ms. P.V.Gawande 1, Ruchira A. Chavan 2, Kalyani S. Kanade 3,Divya D. Urkande 4, Tejaswini S. Jumale 5, Karishma D. Kamale 6 Asst. Professor, Electronics

More information

[Chafle, 2(5): May, 2015] ISSN:

[Chafle, 2(5): May, 2015] ISSN: HAND TALK GLOVES FOR GESTURE RECOGNIZING Kalyani U. Chafle*, Bhagyashree Sharma, Vaishali Patil, Ravi Shriwas *Research scholar, Department of Electronics and Telecommunication, Jawaharlal Darda institute

More information

The Sign2 Project Digital Translation of American Sign- Language to Audio and Text

The Sign2 Project Digital Translation of American Sign- Language to Audio and Text The Sign2 Project Digital Translation of American Sign- Language to Audio and Text Fitzroy Lawrence, Jr. Advisor: Dr. Chance Glenn, The Center for Advanced Technology Development Rochester Institute of

More information

Sign Language Coach. Pupul Mayank Department of Telecommunication Engineering BMS College of Engg, Bangalore, Karnataka, India

Sign Language Coach. Pupul Mayank Department of Telecommunication Engineering BMS College of Engg, Bangalore, Karnataka, India Sign Language Coach M.Vasantha lakshmi Assistant Professor, Department of Telecommunication Engineering Pupul Mayank Department of Telecommunication Engineering Nadir Ahmed Department of Telecommunication

More information

An Ingenious accelerator card System for the Visually Lessen

An Ingenious accelerator card System for the Visually Lessen An Ingenious accelerator card System for the Visually Lessen T. Naveena 1, S. Revathi 2, Assistant Professor 1,2, Department of Computer Science and Engineering 1,2, M. Arun Karthick 3, N. Mohammed Yaseen

More information

Assistive Technology for Regular Curriculum for Hearing Impaired

Assistive Technology for Regular Curriculum for Hearing Impaired Assistive Technology for Regular Curriculum for Hearing Impaired Assistive Listening Devices Assistive listening devices can be utilized by individuals or large groups of people and can typically be accessed

More information

Assistant Professor, PG and Research Department of Computer Applications, Sacred Heart College (Autonomous), Tirupattur, Vellore, Tamil Nadu, India

Assistant Professor, PG and Research Department of Computer Applications, Sacred Heart College (Autonomous), Tirupattur, Vellore, Tamil Nadu, India International Journal of Scientific Research in Computer Science, Engineering and Information Technology 2018 IJSRCSEIT Volume 3 Issue 7 ISSN : 2456-3307 Collaborative Learning Environment Tool In E-Learning

More information

Microphone Input LED Display T-shirt

Microphone Input LED Display T-shirt Microphone Input LED Display T-shirt Team 50 John Ryan Hamilton and Anthony Dust ECE 445 Project Proposal Spring 2017 TA: Yuchen He 1 Introduction 1.2 Objective According to the World Health Organization,

More information

Building an Application for Learning the Finger Alphabet of Swiss German Sign Language through Use of the Kinect

Building an Application for Learning the Finger Alphabet of Swiss German Sign Language through Use of the Kinect Zurich Open Repository and Archive University of Zurich Main Library Strickhofstrasse 39 CH-8057 Zurich www.zora.uzh.ch Year: 2014 Building an Application for Learning the Finger Alphabet of Swiss German

More information

enterface 13 Kinect-Sign João Manuel Ferreira Gameiro Project Proposal for enterface 13

enterface 13 Kinect-Sign João Manuel Ferreira Gameiro Project Proposal for enterface 13 enterface 13 João Manuel Ferreira Gameiro Kinect-Sign Project Proposal for enterface 13 February, 2013 Abstract This project main goal is to assist in the communication between deaf and non-deaf people.

More information

GLOVES BASED HAND GESTURE RECOGNITION USING INDIAN SIGN LANGUAGE

GLOVES BASED HAND GESTURE RECOGNITION USING INDIAN SIGN LANGUAGE GLOVES BASED HAND GESTURE RECOGNITION USING INDIAN SIGN LANGUAGE V. K. Bairagi 1 International Journal of Latest Trends in Engineering and Technology Vol.(8)Issue(4-1), pp.131-137 DOI: http://dx.doi.org/10.21172/1.841.23

More information

Avaya IP Office R9.1 Avaya one-x Portal Call Assistant Voluntary Product Accessibility Template (VPAT)

Avaya IP Office R9.1 Avaya one-x Portal Call Assistant Voluntary Product Accessibility Template (VPAT) Avaya IP Office R9.1 Avaya one-x Portal Call Assistant Voluntary Product Accessibility Template (VPAT) Avaya IP Office Avaya one-x Portal Call Assistant is an application residing on the user s PC that

More information

Hand Gesture Recognition and Speech Conversion for Deaf and Dumb using Feature Extraction

Hand Gesture Recognition and Speech Conversion for Deaf and Dumb using Feature Extraction Hand Gesture Recognition and Speech Conversion for Deaf and Dumb using Feature Extraction Aswathy M 1, Heera Narayanan 2, Surya Rajan 3, Uthara P M 4, Jeena Jacob 5 UG Students, Dept. of ECE, MBITS, Nellimattom,

More information

A Communication tool, Mobile Application Arabic & American Sign Languages (ARSL) Sign Language (ASL) as part of Teaching and Learning

A Communication tool, Mobile Application Arabic & American Sign Languages (ARSL) Sign Language (ASL) as part of Teaching and Learning A Communication tool, Mobile Application Arabic & American Sign Languages (ARSL) Sign Language (ASL) as part of Teaching and Learning Fatima Al Dhaen Ahlia University Information Technology Dep. P.O. Box

More information

New Approaches to Accessibility. Richard Ladner University of Washington

New Approaches to Accessibility. Richard Ladner University of Washington New Approaches to Accessibility Richard Ladner University of Washington 1 What We ll Do Today Disabilities Technology Trends MobileAccessibility Project Other Mobile Projects 2 Basic Data 650 million people

More information

icommunicator, Leading Speech-to-Text-To-Sign Language Software System, Announces Version 5.0

icommunicator, Leading Speech-to-Text-To-Sign Language Software System, Announces Version 5.0 For Immediate Release: William G. Daddi Daddi Brand Communications (P) 212-404-6619 (M) 917-620-3717 Bill@daddibrand.com icommunicator, Leading Speech-to-Text-To-Sign Language Software System, Announces

More information

Hand Gesture Recognition In Real Time Using IR Sensor

Hand Gesture Recognition In Real Time Using IR Sensor Volume 114 No. 7 2017, 111-121 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu ijpam.eu Hand Gesture Recognition In Real Time Using IR Sensor Rohith H R 1,

More information

MobileAccessibility. Richard Ladner University of Washington

MobileAccessibility. Richard Ladner University of Washington MobileAccessibility Richard Ladner University of Washington 1 What We ll Do Tonight Disabilities Technology Trends MobileAccessibility Project Other Mobile Projects 2 Basic Data 650 million people world-wide

More information

Computer Applications: An International Journal (CAIJ), Vol.3, No.1, February Mohammad Taye, Mohammad Abu Shanab, Moyad Rayyan and Husam Younis

Computer Applications: An International Journal (CAIJ), Vol.3, No.1, February Mohammad Taye, Mohammad Abu Shanab, Moyad Rayyan and Husam Younis ANYONE CAN TALK TOOL Mohammad Taye, Mohammad Abu Shanab, Moyad Rayyan and Husam Younis Software Engineering Department Information Technology Faculty Philadelphia University ABSTRACT People who have problems

More information

Date: April 19, 2017 Name of Product: Cisco Spark Board Contact for more information:

Date: April 19, 2017 Name of Product: Cisco Spark Board Contact for more information: Date: April 19, 2017 Name of Product: Cisco Spark Board Contact for more information: accessibility@cisco.com Summary Table - Voluntary Product Accessibility Template Criteria Supporting Features Remarks

More information

Hand Gesture Recognition System for Deaf and Dumb Persons

Hand Gesture Recognition System for Deaf and Dumb Persons Hand Gesture Recognition System for Deaf and Dumb Persons Mr.R.Jagadish 1, R.Gayathri 2, R.Mohanapriya 3, R.Kalaivani 4 and S.Keerthana 5 1 Associate Professor, Department of Electronics and Communication

More information

Translation of Unintelligible Speech of User into Synthetic Speech Using Augmentative and Alternative Communication

Translation of Unintelligible Speech of User into Synthetic Speech Using Augmentative and Alternative Communication Translation of Unintelligible Speech of User into Synthetic Speech Using Augmentative and Alternative Communication S.Jasmine Vanitha 1, K. Manimozhi 2 P.G scholar, Department of EEE, V.R.S College of

More information

Multimedia courses generator for hearing impaired

Multimedia courses generator for hearing impaired Multimedia courses generator for hearing impaired Oussama El Ghoul and Mohamed Jemni Research Laboratory of Technologies of Information and Communication UTIC Ecole Supérieure des Sciences et Techniques

More information

Available online at ScienceDirect. Procedia Technology 24 (2016 )

Available online at   ScienceDirect. Procedia Technology 24 (2016 ) Available online at www.sciencedirect.com ScienceDirect Procedia Technology 24 (2016 ) 1068 1073 International Conference on Emerging Trends in Engineering, Science and Technology (ICETEST - 2015) Improving

More information

Gesture Vocalizer using IoT

Gesture Vocalizer using IoT Gesture Vocalizer using IoT Deepa Haridas 1, Drishya M 2, Reshma Johnson 3, Rose Simon 4, Sradha Mohan 5, Linu Babu P 6 UG Scholar, Electronics and Communication, IES College of Engineering, Thrissur,

More information

An Approach to Hand Gesture Recognition for Devanagari Sign Language using Image Processing Tool Box

An Approach to Hand Gesture Recognition for Devanagari Sign Language using Image Processing Tool Box An Approach to Hand Gesture Recognition for Devanagari Sign Language using Image Processing Tool Box Prof. Abhijit V. Warhade 1 Prof. Pranali K. Misal 2 Assistant Professor, Dept. of E & C Engineering

More information

IDENTIFICATION OF REAL TIME HAND GESTURE USING SCALE INVARIANT FEATURE TRANSFORM

IDENTIFICATION OF REAL TIME HAND GESTURE USING SCALE INVARIANT FEATURE TRANSFORM Research Article Impact Factor: 0.621 ISSN: 2319507X INTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY A PATH FOR HORIZING YOUR INNOVATIVE WORK IDENTIFICATION OF REAL TIME

More information

Summary Table: Voluntary Product Accessibility Template

Summary Table: Voluntary Product Accessibility Template Date: August 16 th, 2011 Name of Product: Cisco Unified Wireless IP Phone 7921G, 7925G, 7925G-EX and 7926G Contact for more information: Conrad Price, cprice@cisco.com Summary Table: Voluntary Product

More information

easy read Your rights under THE accessible InformatioN STandard

easy read Your rights under THE accessible InformatioN STandard easy read Your rights under THE accessible InformatioN STandard Your Rights Under The Accessible Information Standard 2 1 Introduction In July 2015, NHS England published the Accessible Information Standard

More information

Microcontroller and Sensors Based Gesture Vocalizer

Microcontroller and Sensors Based Gesture Vocalizer Microcontroller and Sensors Based Gesture Vocalizer ATA-UR-REHMAN SALMAN AFGHANI MUHAMMAD AKMAL RAHEEL YOUSAF Electrical Engineering Lecturer Rachna College of Engg. and tech. Gujranwala Computer Engineering

More information

A Design of Prototypic Hand Talk Assistive Technology for the Physically Challenged

A Design of Prototypic Hand Talk Assistive Technology for the Physically Challenged A Design of Prototypic Hand Talk Assistive Technology for the Physically Challenged S. Siva Srujana 1, S. Jahnavi 2, K. Jhansi 3 1 Pragati Engineering College, Surampalem, Peddapuram, A.P, India 2 Pragati

More information

DESIGN OF SMART HEARING AID

DESIGN OF SMART HEARING AID International Journal of Electronics and Communication Engineering and Technology (IJECET) Volume 7, Issue 6, November-December 2016, pp. 32 38, Article ID: IJECET_07_06_005 Available online at http://www.iaeme.com/ijecet/issues.asp?jtype=ijecet&vtype=7&itype=6

More information

Note: This document describes normal operational functionality. It does not include maintenance and troubleshooting procedures.

Note: This document describes normal operational functionality. It does not include maintenance and troubleshooting procedures. Date: 26 June 2017 Voluntary Accessibility Template (VPAT) This Voluntary Product Accessibility Template (VPAT) describes accessibility of Polycom s CX5100 Unified Conference Station against the criteria

More information

Hand Gestures Recognition System for Deaf, Dumb and Blind People

Hand Gestures Recognition System for Deaf, Dumb and Blind People Hand Gestures Recognition System for Deaf, Dumb and Blind People Channaiah Chandana K 1, Nikhita K 2, Nikitha P 3, Bhavani N K 4, Sudeep J 5 B.E. Student, Dept. of Information Science & Engineering, NIE-IT,

More information

Voluntary Product Accessibility Template (VPAT)

Voluntary Product Accessibility Template (VPAT) Voluntary Product Accessibility Template (VPAT) Date: January 25 th, 2016 Name of Product: Mitel 6730i, 6731i, 6735i, 6737i, 6739i, 6753i, 6755i, 6757i, 6863i, 6865i, 6867i, 6869i, 6873i Contact for more

More information

A Smart Texting System For Android Mobile Users

A Smart Texting System For Android Mobile Users A Smart Texting System For Android Mobile Users Pawan D. Mishra Harshwardhan N. Deshpande Navneet A. Agrawal Final year I.T Final year I.T J.D.I.E.T Yavatmal. J.D.I.E.T Yavatmal. Final year I.T J.D.I.E.T

More information

Characterization of 3D Gestural Data on Sign Language by Extraction of Joint Kinematics

Characterization of 3D Gestural Data on Sign Language by Extraction of Joint Kinematics Human Journals Research Article October 2017 Vol.:7, Issue:4 All rights are reserved by Newman Lau Characterization of 3D Gestural Data on Sign Language by Extraction of Joint Kinematics Keywords: hand

More information

Sound Interfaces Engineering Interaction Technologies. Prof. Stefanie Mueller HCI Engineering Group

Sound Interfaces Engineering Interaction Technologies. Prof. Stefanie Mueller HCI Engineering Group Sound Interfaces 6.810 Engineering Interaction Technologies Prof. Stefanie Mueller HCI Engineering Group what is sound? if a tree falls in the forest and nobody is there does it make sound?

More information

Using Deep Convolutional Networks for Gesture Recognition in American Sign Language

Using Deep Convolutional Networks for Gesture Recognition in American Sign Language Using Deep Convolutional Networks for Gesture Recognition in American Sign Language Abstract In the realm of multimodal communication, sign language is, and continues to be, one of the most understudied

More information

PC BASED AUDIOMETER GENERATING AUDIOGRAM TO ASSESS ACOUSTIC THRESHOLD

PC BASED AUDIOMETER GENERATING AUDIOGRAM TO ASSESS ACOUSTIC THRESHOLD Volume 119 No. 12 2018, 13939-13944 ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu ijpam.eu PC BASED AUDIOMETER GENERATING AUDIOGRAM TO ASSESS ACOUSTIC THRESHOLD Mahalakshmi.A, Mohanavalli.M,

More information

I. Language and Communication Needs

I. Language and Communication Needs Child s Name Date Additional local program information The primary purpose of the Early Intervention Communication Plan is to promote discussion among all members of the Individualized Family Service Plan

More information

ISSN: [ ] [Vol-3, Issue-3 March- 2017]

ISSN: [ ] [Vol-3, Issue-3 March- 2017] A Smart Werable Device for Vocally Challenged People Using IoT Platform Dr. S. Jothi Muneeswari 1, K.Karthik 2, S. Karthick 3, R. Lokesh 4 1 ME, PhD, Professor Professor, Department of ECE, DMI College

More information

Fujitsu LifeBook T Series TabletPC Voluntary Product Accessibility Template

Fujitsu LifeBook T Series TabletPC Voluntary Product Accessibility Template Fujitsu LifeBook T Series TabletPC Voluntary Product Accessibility Template 1194.21 Software Applications and Operating Systems* (a) When software is designed to run on a system that This product family

More information

Finger spelling recognition using distinctive features of hand shape

Finger spelling recognition using distinctive features of hand shape Finger spelling recognition using distinctive features of hand shape Y Tabata 1 and T Kuroda 2 1 Faculty of Medical Science, Kyoto College of Medical Science, 1-3 Imakita Oyama-higashi, Sonobe, Nantan,

More information

Advanced Audio Interface for Phonetic Speech. Recognition in a High Noise Environment

Advanced Audio Interface for Phonetic Speech. Recognition in a High Noise Environment DISTRIBUTION STATEMENT A Approved for Public Release Distribution Unlimited Advanced Audio Interface for Phonetic Speech Recognition in a High Noise Environment SBIR 99.1 TOPIC AF99-1Q3 PHASE I SUMMARY

More information

Communications Accessibility with Avaya IP Office

Communications Accessibility with Avaya IP Office Accessibility with Avaya IP Office Voluntary Product Accessibility Template (VPAT) 1194.23, Telecommunications Products Avaya IP Office is an all-in-one solution specially designed to meet the communications

More information

Improving Reading of Deaf and Hard of Hearing Children Through Technology Morocco

Improving Reading of Deaf and Hard of Hearing Children Through Technology Morocco Improving Reading of Deaf and Hard of Hearing Children Through Technology Morocco A presentation by: Corinne K. Vinopol, Ph.D. Institute for Disabilities Research and Training, Inc. (IDRT) www.idrt.com

More information

Voluntary Product Accessibility Template (VPAT)

Voluntary Product Accessibility Template (VPAT) Avaya Vantage TM Basic for Avaya Vantage TM Voluntary Product Accessibility Template (VPAT) Avaya Vantage TM Basic is a simple communications application for the Avaya Vantage TM device, offering basic

More information

How can the Church accommodate its deaf or hearing impaired members?

How can the Church accommodate its deaf or hearing impaired members? Is YOUR church doing enough to accommodate persons who are deaf or hearing impaired? Did you know that according to the World Health Organization approximately 15% of the world s adult population is experiencing

More information

Assistive Technologies

Assistive Technologies Revista Informatica Economică nr. 2(46)/2008 135 Assistive Technologies Ion SMEUREANU, Narcisa ISĂILĂ Academy of Economic Studies, Bucharest smeurean@ase.ro, isaila_narcisa@yahoo.com A special place into

More information

Implementation of image processing approach to translation of ASL finger-spelling to digital text

Implementation of image processing approach to translation of ASL finger-spelling to digital text Rochester Institute of Technology RIT Scholar Works Articles 2006 Implementation of image processing approach to translation of ASL finger-spelling to digital text Divya Mandloi Kanthi Sarella Chance Glenn

More information

Hand-Gesture Recognition System For Dumb And Paraplegics

Hand-Gesture Recognition System For Dumb And Paraplegics Hand-Gesture Recognition System For Dumb And Paraplegics B.Yuva Srinivas Raja #1, G.Vimala Kumari *2, K.Susmitha #3, CH.V.N.S Akhil #4, A. Sanhita #5 # Student of Electronics and Communication Department,

More information

Accessibility Standards Mitel MiVoice 8528 and 8568 Digital Business Telephones

Accessibility Standards Mitel MiVoice 8528 and 8568 Digital Business Telephones Accessibility Standards Mitel products are designed with the highest standards of accessibility. Below is a table that outlines how Mitel MiVoice 8528 and 8568 digital business telephones conform to section

More information

DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation

DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation Biyi Fang Michigan State University ACM SenSys 17 Nov 6 th, 2017 Biyi Fang (MSU) Jillian Co (MSU) Mi Zhang

More information

Analysis of Emotion Recognition using Facial Expressions, Speech and Multimodal Information

Analysis of Emotion Recognition using Facial Expressions, Speech and Multimodal Information Analysis of Emotion Recognition using Facial Expressions, Speech and Multimodal Information C. Busso, Z. Deng, S. Yildirim, M. Bulut, C. M. Lee, A. Kazemzadeh, S. Lee, U. Neumann, S. Narayanan Emotion

More information

Communication services for deaf and hard of hearing people

Communication services for deaf and hard of hearing people Communication services for deaf and hard of hearing people 2 3 About this leaflet This leaflet is written for deaf, deafened and hard of hearing people who want to find out about communication services.

More information

EBCC Data Analysis Tool (EBCC DAT) Introduction

EBCC Data Analysis Tool (EBCC DAT) Introduction Instructor: Paul Wolfgang Faculty sponsor: Yuan Shi, Ph.D. Andrey Mavrichev CIS 4339 Project in Computer Science May 7, 2009 Research work was completed in collaboration with Michael Tobia, Kevin L. Brown,

More information

easy read Your rights under THE accessible InformatioN STandard

easy read Your rights under THE accessible InformatioN STandard easy read Your rights under THE accessible InformatioN STandard Your Rights Under The Accessible Information Standard 2 Introduction In June 2015 NHS introduced the Accessible Information Standard (AIS)

More information

Making Sure People with Communication Disabilities Get the Message

Making Sure People with Communication Disabilities Get the Message Emergency Planning and Response for People with Disabilities Making Sure People with Communication Disabilities Get the Message A Checklist for Emergency Public Information Officers This document is part

More information

INTERACTIVE GAMES USING KINECT 3D SENSOR TECHNOLOGY FOR AUTISTIC CHILDREN THERAPY By Azrulhizam Shapi i Universiti Kebangsaan Malaysia

INTERACTIVE GAMES USING KINECT 3D SENSOR TECHNOLOGY FOR AUTISTIC CHILDREN THERAPY By Azrulhizam Shapi i Universiti Kebangsaan Malaysia INTERACTIVE GAMES USING KINECT 3D SENSOR TECHNOLOGY FOR AUTISTIC CHILDREN THERAPY By Azrulhizam Shapi i Universiti Kebangsaan Malaysia INTRODUCTION Autism occurs throughout the world regardless of race,

More information

Hearing Words and pictures Mobiles are changing the way people who are deaf communicate *US sign language For people who are deaf or have moderate to profound hearing loss some 278 million worldwide, according

More information

Design and Implementation of Speech Processing in Cochlear Implant

Design and Implementation of Speech Processing in Cochlear Implant Design and Implementation of Speech Processing in Cochlear Implant Pooja T 1, Dr. Priya E 2 P.G. Student (Embedded System Technologies), Department of Electronics and Communication Engineering, Sri Sairam

More information

ITU-T. FG AVA TR Version 1.0 (10/2013) Part 3: Using audiovisual media A taxonomy of participation

ITU-T. FG AVA TR Version 1.0 (10/2013) Part 3: Using audiovisual media A taxonomy of participation International Telecommunication Union ITU-T TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU FG AVA TR Version 1.0 (10/2013) Focus Group on Audiovisual Media Accessibility Technical Report Part 3: Using

More information

Hand Sign to Bangla Speech: A Deep Learning in Vision based system for Recognizing Hand Sign Digits and Generating Bangla Speech

Hand Sign to Bangla Speech: A Deep Learning in Vision based system for Recognizing Hand Sign Digits and Generating Bangla Speech Hand Sign to Bangla Speech: A Deep Learning in Vision based system for Recognizing Hand Sign Digits and Generating Bangla Speech arxiv:1901.05613v1 [cs.cv] 17 Jan 2019 Shahjalal Ahmed, Md. Rafiqul Islam,

More information

Mechanicsburg, Ohio. Policy: Ensuring Effective Communication for Individuals with Disabilities Policy Section: Inmate Supervision and Care

Mechanicsburg, Ohio. Policy: Ensuring Effective Communication for Individuals with Disabilities Policy Section: Inmate Supervision and Care Tri-County Regional Jail Policy & Procedure Policy: Ensuring Effective Communication for Individuals with Disabilities Policy Section: Inmate Supervision and Care Tri-County Regional Jail Mechanicsburg,

More information