Gesture Vocalizer using IoT


Deepa Haridas 1, Drishya M 2, Reshma Johnson 3, Rose Simon 4, Sradha Mohan 5, Linu Babu P 6
UG Scholar, Electronics and Communication, IES College of Engineering, Thrissur, India 1,2,3,4,5
Asst. Professor, Electronics and Communication, IES College of Engineering, Thrissur, India 6

Abstract: This paper proposes a system that converts sign language to voice, by which mute people can communicate easily over the internet. The system follows the American Sign Language (ASL) standard. Variations in manual features are sensed by the microcontroller and sent to cloud storage along with real-time video; at the receiver-side PC the gesture is recognized and translated to speech. One of the major challenges for people with hearing impairment is their fettered communication with the outside world: limited access to technology, owing to their inability to communicate, has a significant impact on their lives. Research is being done on several fronts to enhance their communication with the external world, one of which is the Gesture Vocalizer, which works precisely on hand movement and into which different languages can be installed. The system also offers high reliability and fast response.

Keywords: Raspberry Pi, ASL, Flex Sensor, ATmega328, LCD.

I. INTRODUCTION

In today's world there is a continuous demand for automatic appliances. With the increase in the level of material comfort, there is a sense of urgency for developing circuits that ease the complexity of life. Communication for audio-vocally impaired people poses a tedious task. Throughout the world they use sign language to communicate with others, but this is possible only for those who have undergone special training; common people find it difficult to understand the gesture language. The Gesture Vocalizer system is developed to overcome these real-time issues.
Whenever the proposed system senses a gesture, it plays the corresponding recorded voice. The main purpose of this project is to present a system that can translate real-time fingerspelling of American Sign Language into text, improving the communication of people with hearing disability. The system is integrated into a glove that senses the hand movements made when gesturing letters of the sign-language alphabet, then recognizes the data, transmits it wirelessly to a PC over the internet, and displays the corresponding information. With the employment of the glove, signs can be made with more accuracy and better consistency, and a wearer can practice sign language without having to be next to a computer. The sign-language glove thus appears to be a convenient aid in communication with the deaf.

II. RELATED WORK

A. ASL Interpreter
The American Sign Language (ASL) interpreter is a visual language system based on hand gestures in which a glove with flex sensors handles the 26 letters of the English alphabet and saves them into the EEPROM of the microcontroller. The LCD display is employed as a reference for the bending of each finger needed to correctly define a letter. The user is not required to know ASL and can use a table of sign-language letters for reference. The resulting product can be used at public places such as airports, railway stations, and the counters of banks and hotels, wherever there is communication between different people.

B. Low-Cost Gesture Recognition System
This system is a prototype of a low-cost, open-hardware, static-gesture recognition system. It has three major components: a Glove and Sensor Unit consisting of a pair of gloves embedded with tailor-made low-cost flex and contact sensors, Primary Supporting Hardware that traces variation in the input values, and Secondary Supporting Hardware (SSH) that processes the input values and recognizes the gesture accurately.
The performance of this gesture recognition system is evaluated using a data set comprising 36 distinctive gestures. These gestures represent a total of 120 gestures that embody all gestures across five globally used sign languages.

C. Smart Glove
Physically challenged or bed-ridden patients typically have to rely on others to operate switches for lights, fans, TV, etc. Remote controls do help such people, but certain illnesses leave the patient unable to operate even a remote controller, owing to lack of flexibility in the movement of the hands. The solution for these people is a smart glove worn on the hand: with a diminutive movement of a finger, a relay can be made to activate a switch. The device is designed to help a physically challenged person perform simple operations such as switching on a light or fan, by mapping a simple gesture like folding a finger to a suitable action.

III. PROPOSED METHODOLOGY

A system that converts sign language to voice, by which vocally impaired people can easily communicate with other people, is implemented. The system follows the American Sign Language (ASL) standard. In automatic sign language translation, one of the main problems is the use of spatial information in sign language and its proper representation and translation; such locations are encoded at fixed points in signing space as spatial references for motion events. We present a novel approach using a large-vocabulary speech recognition system that is able to recognize sentences of continuous sign language speaker-independently. The hardware is implemented by fixing a flex sensor on each finger. Variation in finger movement causes a voltage difference at the output of the flex sensor, which is read by the microcontroller. These voltage variations are compared with a lookup table and the data is sent to the PC side through Wi-Fi. The PC side recognizes the word and translates it into speech. A VB script program is used on the PC side and a web browser is used to access the network.

IV. BLOCK DIAGRAM

The block diagram of the proposed system is given in Fig. 1. The power supply provides a regulated DC voltage. Flex sensors are fitted to each finger, and simple soft contact switches are used for gesture recognition along with the flex sensors. The microcontroller, an ATmega328P, analyses the analog signals obtained from the flex sensors. The data obtained by the microcontroller can be viewed on the LCD screen interfaced with the ATmega328. Based on comparison with the lookup tables, the microcontroller transmits the data to a Raspberry Pi. The Raspberry Pi, which is connected to a camera module, transfers the data along with video of the user to the cloud.

The receiver section, Fig. 1(b), primarily consists of a PC connected to the internet through a modem. It is loaded with the Visual Basic program and the website exclusively designed for interpreting the data obtained from the internet. The data thus obtained is sent to a loudspeaker for clear audibility at the receiver. The receiver can reply using the keyboard on the same website, completing the communication.

Fig. 1(a) Transmitter System Block Diagram
Fig. 1(b) Receiver System Block Diagram

Copyright to IJIREEICE DOI 10.17148/IJIREEICE.2017.5424
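The lookup-table comparison described above (flex-sensor readings matched against stored gesture values before transmission) can be sketched as follows. The tolerance band, table entries, and letter mapping here are entirely hypothetical illustrations, not the paper's calibration data:

```cpp
#include <array>
#include <cstdlib>

// One stored gesture: the letter it encodes and the expected 10-bit
// ADC reading of each of the five flex sensors.
struct Gesture {
    char letter;
    std::array<int, 5> flex;
};

// Hypothetical tolerance band (in ADC counts) around each stored value.
constexpr int kTolerance = 40;

// Two entirely made-up calibration entries, for illustration only.
const Gesture kDemoTable[] = {
    {'A', {720, 180, 190, 200, 210}},
    {'B', {300, 800, 810, 790, 805}},
};

// Returns the matched letter, or '?' when no stored gesture matches.
char matchGesture(const std::array<int, 5>& reading,
                  const Gesture* table, int tableSize) {
    for (int i = 0; i < tableSize; ++i) {
        bool allFingersMatch = true;
        for (int f = 0; f < 5; ++f) {
            if (std::abs(reading[f] - table[i].flex[f]) > kTolerance) {
                allFingersMatch = false;
                break;
            }
        }
        if (allFingersMatch) return table[i].letter;
    }
    return '?';
}
```

On the actual glove, a comparison of this kind would run after each set of ADC samples, with the matched character forwarded over USART to the Raspberry Pi.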

V. HARDWARE IMPLEMENTATION

Flex sensors work as analog voltage dividers whose resistance changes depending on the amount of bend applied to the sensor. The microcontroller, an ATmega328P, has an inbuilt ADC and USART data-transfer module, which helps in analysing the analog signals obtained from the flex sensors. The microcontroller is programmed using an Arduino Uno board. Figure 2(a) shows the simulation output of the sign language translator gesturing an alphabet, and Figure 2(b) shows the Raspberry Pi 3 model interfaced with the camera module; the data is then sent to the internet by this module. All the sections were initially simulated using Proteus Design Suite Version 8.0.

Fig. 2(a) Hardware Implementation
Fig. 2(b) Hardware Implementation

VI. SOFTWARE IMPLEMENTATION

The system's software was developed using the following tools:
Arduino IDE Version 1.7.6 - to program the microcontroller.
Visual Basic Version 6 - to access the webpage through a web browser.
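The flex-sensor voltage-divider relation described in Section V can be made concrete with a small numeric sketch. The fixed-resistor value and the divider orientation (flex sensor on the lower leg) are assumptions for illustration, not values from the paper:

```cpp
// Assumed divider: Vcc -- Rfixed -- (ADC tap) -- Rflex -- GND, so
// adc = 1023 * Rflex / (Rfixed + Rflex) on the ATmega328P's 10-bit ADC.
// Solving for the sensor's resistance:
//   Rflex = Rfixed * adc / (1023 - adc)
double flexResistanceOhms(int adc, double rFixedOhms) {
    if (adc <= 0 || adc >= 1023) return -1.0;  // reading outside usable range
    return rFixedOhms * adc / (1023.0 - adc);
}
```

For example, with an assumed 10 kΩ fixed resistor, a reading of 341 counts (one third of full scale) implies a sensor resistance of 5 kΩ; as the finger bends and the sensor's resistance rises, the ADC reading rises with it.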

A. Flow Chart for System Software

Fig. 3(a) Transmitter System

In the transmitter section the ATmega microcontroller reads the values from the flex sensors and the contact switch, and this data is compared with the stored (calibrated) values. If the values match those corresponding to a character, the character is sent to the Raspberry Pi module; if they do not match, data is re-collected until a match is obtained for all flex-sensor values. This process is depicted in Fig. 3(a).

In the receiver section, the data along with real-time video is accessed from the internet through the form designed in Visual Studio. A login IP corresponding to the Raspberry Pi module being used is required; if the IP matches, the video stream along with the data log can be viewed on the web page hosted by the Raspberry Pi module. This process is depicted in Fig. 3(b).

Fig. 3(b) Receiver System
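As a complementary sketch of the calibration step above, one common approach (again an assumption, not taken from the paper) records a flat and a fully-bent ADC reading per finger, then maps later readings onto a clamped 0-100 bend percentage before comparison:

```cpp
// Hypothetical per-finger calibration record: ADC readings captured
// once with the finger held flat and once fully bent.
struct FingerCal {
    int flatAdc;
    int bentAdc;
};

// Maps a raw ADC reading to a bend percentage in [0, 100], clamped.
int bendPercent(int adc, const FingerCal& cal) {
    int span = cal.bentAdc - cal.flatAdc;
    if (span == 0) return 0;  // degenerate calibration, avoid divide-by-zero
    int pct = 100 * (adc - cal.flatAdc) / span;
    if (pct < 0) pct = 0;
    if (pct > 100) pct = 100;
    return pct;
}
```

Comparing bend percentages rather than raw ADC counts makes a stored gesture table less sensitive to sensor-to-sensor variation between gloves.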

B. System Software Simulation Model

The data sent from the Raspberry Pi module is accessed by the PC from the internet. The web screen is accessed by a Visual Studio application with a window that allows the user to enter the IP address corresponding to the Raspberry Pi module, as shown in Fig. 4(a). After the IP address has been provided in the VB screen, the page is redirected to a web page corresponding to the Raspberry Pi module, Fig. 4(b). The screen shows live video streaming along with the data log; the data log can be viewed by selecting the View Log link below the video stream, as shown in Fig. 5.

Fig. 4(a) IP Login Screen
Fig. 4(b) IP Login Screen
Fig. 5 Web Page

VII. CONCLUSION

The Gesture Vocalizer is a hand-gesture-based interface for promoting communication between normal people and people with speech and hearing disabilities. This digital Gesture Vocalizer is a social-initiative project that helps to bridge the communication gap between normal people and disabled people. The system can be easily implemented, and its compact, portable design is its main advantage. The device can be assimilated into a jacket or vest and can even add to one's style statement. It eliminates the need for an interpreter. The project thus contributes to the upliftment of the Deaf community and ensures that they too lead a life no different from the rest, breaking down the social stigmas that prevail in our society and bridging the communication gap between the deaf community and the hearing world.

VIII. ACKNOWLEDGMENT

The authors would like to thank the Management, Principal, Head of the Department, and all the faculty and staff of the Electronics and Communication Department of IES College of Engineering, Thrissur, Kerala, for their aid and technical guidance throughout the course of this work. We also extend our sincere thanks to the entire R & D team of the college for providing the facilities to carry out our project in the R & D lab.

REFERENCES
[1] Harini Sekar, Rajashekar R, Gosakan Srinivasan, Priyanka Suresh, Vineeth Vijayaraghavan, "Low-cost Intelligent Static Gesture Recognition System", 2016 IEEE (978-1-4673-9519-9/16).
[2] Kunal Kadam, Rucha Ganu, Ankita Bhosekar, Prof. S. D. Joshi, "American Sign Language Interpreter", 2012 IEEE Fourth International Conference on Technology for Education.
[3] Rini Akmeliawati, Faez S. Ba Tis, Umar J. Wani, "Design and development of hand-glove controlled wheelchair", 4th International Conference on Mechatronics (ICOM), 2011, DOI: 10.1109/ICOM.2011.5937126.
[4] Trudy Suggs, "Alpha Teach Yourself American Sign Language in 24 Hours", December 2, 2003.
[5] Jerome M. Allen, Pierre K. Asselin, Richard Foulds, "American Sign Language Finger Spelling Recognition System", 0-7803-7767-2/03, IEEE, March 2003.
[6] Syed Atif Mehdi, Yasir Niaz Khan, "Sign Language Recognition Using Sensor Glove", Proceedings of the 9th International Conference on Neural Information Processing (ICONIP '02), Vol. 5, November 2002.
[7] Richard A. Tennant, Marianne Gluszak Brown, "The American Sign Language Handshape Dictionary", Gallaudet University Press, p. 9, June 1, 1998.

BIOGRAPHIES

Deepa Haridas is currently pursuing the Bachelor's degree in Electronics and Communication Engineering at the IES College of Engineering, Thrissur. She was an active member of the NSS IES chapter during 2013-17.

Drishya M is currently pursuing the Bachelor's degree in Electronics and Communication Engineering at the IES College of Engineering, Thrissur. She was an active member of the NSS IES chapter during 2013-17.

Reshma Johnson is currently pursuing the Bachelor's degree in Electronics and Communication Engineering at the IES College of Engineering, Thrissur.

Rose Simon is currently pursuing the Bachelor's degree in Electronics and Communication Engineering at the IES College of Engineering, Thrissur.

Sradha Mohan is currently pursuing the Bachelor's degree in Electronics and Communication Engineering at the IES College of Engineering, Thrissur.

Linu Babu P is currently working as an Assistant Professor in the Department of Electronics and Communication Engineering, IES College of Engineering, Thrissur. She obtained the Bachelor of Engineering degree from Anna University, Tamil Nadu, India, and the Master of Technology degree in Applied Electronics & Communication Systems from Calicut University, Kerala, India. She did her M.Tech research project in the field of steganography, specializing in data-hiding technology.