IJESRT
INTERNATIONAL JOURNAL OF ENGINEERING SCIENCES & RESEARCH TECHNOLOGY

IMAGE PROCESSING BASED SPEAKING SYSTEM FOR MUTE PEOPLE USING HAND GESTURES
Abhishek Jain *1, Lakshita Jain 2, Ishaan Sharma 3 & Abhishek Chauhan 4
*1,2&3 B.Tech, Department of ECE, SRMIST
4 Assistant Professor, Department of ECE, SRMIST
DOI: 10.5281/zenodo.1218613

ABSTRACT
This paper presents a sign-to-speech converter for mute people.[1] In the present world it is very difficult for mute people to talk with ordinary people, and communication remains nearly impossible unless ordinary people learn sign language. Sign language is difficult to learn, and not everyone can be expected to learn it, so not every person can share thoughts with these physically impaired people. This paper therefore presents a system that enables mute people to communicate with everyone.[2] In this system a webcam is placed in front of the physically impaired person, who makes a hand gesture in front of the camera. The webcam captures the gesture, and the image is processed using the principal component analysis (PCA) algorithm.[3] The captured coordinates are mapped against those stored previously, and the matching picture in the database is identified. Continuing in this way, the physically impaired person can build up the entire sentence that he wants to communicate. The sentence is then translated into speech so that it is audible to everyone.
Keywords: Webcam, Physically Impaired, Gesture, PCA, Image Processing.

I. INTRODUCTION
Overview
The purpose of this system is to serve, in day-to-day life, as an image processing based sign-to-speech converter for mute people using the PCA algorithm. This section explains the aim and overall scope of the system, along with its constraints, interfaces, and interactions with other external applications. It also explores the need and motivation for interpreting Indian Sign Language (ISL), which can open opportunities for hearing-impaired people in industry.[4] The aim of the proposed project is to overcome the challenge of skin color detection so as to provide a natural interface between user and machine. The project is developed for physically impaired people and benefits them by allowing them to communicate with everyone. In our system a webcam is placed in front of the physically impaired person.[5] The person makes a particular gesture in front of the camera; the webcam captures the exact positions of the fingers, and the image is processed using the principal component analysis algorithm. The captured coordinates are mapped against those stored previously, and the matching image in the database is identified.[6] Continuing in this way, the physically impaired person can build up the entire sentence that he wants to communicate. The sentence is then translated into speech so that it is audible to everyone.[7] Using this system, physically impaired people can communicate freely with everyone, which would indeed be a great achievement.

Objective
The great challenge lies in developing an economically feasible and hardware-independent system so that physically impaired people can communicate easily.
A datasheet of all the hand gestures is prepared beforehand. Then, using MATLAB programming, a real-time picture of the sign is captured and compared with the datasheet (the captured photo is converted into a binary image).
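The following is a minimal MATLAB sketch, not taken from the paper, of how one captured gesture might be binarized and added to such a datasheet. The 64x64 image size, the 200-pixel blob threshold, the folder name, and the file name are all assumptions made for illustration.

```matlab
% Illustrative sketch: capture one gesture frame, convert it to a binary image,
% and normalise it to a fixed size before adding it to the datasheet.
cam = webcam(1);                        % needs the MATLAB Support Package for USB Webcams
rgbFrame  = snapshot(cam);              % real-time picture of the sign
grayFrame = rgb2gray(rgbFrame);         % drop color information
bwFrame   = imbinarize(grayFrame);      % photo converted into a binary image
bwFrame   = bwareaopen(bwFrame, 200);   % remove small noisy blobs
bwFrame   = imresize(bwFrame, [64 64]); % fixed size so gestures can be compared
imwrite(bwFrame, fullfile('datasheet', 'gesture_A_01.png'));  % store in the datasheet
clear cam
```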

MATLAB then gives an output to the Arduino corresponding to the matched picture. From the Arduino, this output is passed to the APR circuit, which has 5 inputs, each storing a different message. According to the output received from the Arduino, the respective message is played. An amplifier at the end amplifies the message, and a speaker makes it easily audible. (A minimal sketch of this MATLAB-to-Arduino hand-off is given after the Materials and Methods section below.)

Advantages
- Low cost
- Compact system
- Flexible to users
- Low power consumption

Applications
- Gesture recognition and conversion
- Translating device for mute people
- SMS composition on mobile phones
- Translation of sign language into many regional languages

II. EARLIER WORKS
Ibrahim Patel and Dr. Y. Srinivas Rao propose an automated speech synthesizer and converter for cue-symbol generation for the hearing impaired. For interaction between normally speaking people and the hearing impaired, the communication gap leads to a non-interactive medium between the two communicators. To bridge this gap, their paper proposes a medium for converting a speech signal into visual cue symbols by automatically synthesizing the given speech signal and mapping it to cue symbols for visual representation. The system focuses on reducing the communication gap between normal people and the vocally disabled.

Anbarasi Rajamohan, Hemavathy R., and Dhanalakshmi propose a deaf-mute communication interpreter. A deaf-mute person has always found it difficult to communicate with normal people. Their project facilitates communication by means of a glove-based deaf-mute communication interpreter system. For each specific gesture, the flex sensors produce a proportional change in resistance and an accelerometer measures the movement of the hand. The glove includes two modes of operation: a training mode to benefit every user, and an operational mode.

III. MATERIALS AND METHODS
Arduino IDE: The open-source Arduino Software (IDE) makes it easy to write code and upload it to the board. It runs on Windows, Mac OS X, and Linux. The environment is written in Java and based on Processing and other open-source software.[8]

Arduino Uno: The Arduino Uno is a microcontroller board based on the ATmega328 (datasheet). It has 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz crystal oscillator, a USB connection, a power jack, an ICSP header, and a reset button.[9] In some textile-based projects it is the most practical tool for maintaining the hang of the fabric. Educationally, it is a very safe and unintimidating way to learn how to use embedded electronics.[10]

Digital voice processor using APR9600: Digital voice recording chips with different features and coding technologies for speech compression and processing are available on the market from a number of semiconductor manufacturers.[11] The APR9600 single-chip voice recorder and playback device from Aplus Integrated Circuits uses a proprietary analogue storage technique implemented in a flash non-volatile memory process, in which each cell can store up to 256 voltage levels.

Speaker: Used for audio amplification and audio output.
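The sketch below illustrates, under assumed link details, how MATLAB might hand the matched gesture index to the Arduino over the USB serial connection described above. The paper does not specify the protocol; the 'COM3' port, 9600 baud rate, and single-byte index encoding are assumptions, and recent MATLAB releases would use serialport instead of the classic serial interface shown here.

```matlab
% Illustrative sketch of the MATLAB-to-Arduino hand-off (assumed protocol):
% the index of the matched gesture (1..5) is sent as a single byte, and the
% Arduino is expected to map it to one of the 5 APR9600 message inputs.
s = serial('COM3', 'BaudRate', 9600);  % must match Serial.begin(9600) on the Arduino
fopen(s);
matchedIndex = 3;                      % index of the matched gesture / stored message
fwrite(s, uint8(matchedIndex));        % one byte over the USB serial link
fclose(s);
delete(s);
```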

Fig. 1. Block diagram of the system

First, the database is created beforehand. A real-time image is then captured and matched against the database, and using MATLAB programming the output is given to the Arduino, which processes the information for each specific gesture.[12] The controller has two modes of operation: a training mode and an operational mode. In training mode the gesture is formed by the user and the corresponding voltage levels are stored in EEPROM.[13] In operational mode the input is compared with the predefined values, and the matched gesture is sent to the text-to-speech conversion module, i.e. the APR circuit, where all the messages are pre-stored; the corresponding output is played and can be easily heard on the speaker.

IV. PROPOSED METHODOLOGY
To implement this system we require different hand gestures, and a web camera is required for capturing them. The person makes different gestures in front of the camera. When the user makes the gesture of a symbol, the system involves the following modules:

Fig. 2. Proposed system of sign recognition

1. Generation of database: While processing the images, it is first necessary to prepare a proper database. Five images of each symbol, a total of 40 images, are captured in order to increase the accuracy of image identification.
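As an illustration of the database-generation step, the sketch below loads the 40 stored gesture images and stacks each one as a column vector ready for PCA training. The 'datasheet' folder layout, the 64x64 size, and the use of file names as labels are assumptions carried over from the capture sketch above.

```matlab
% Illustrative sketch: build the training matrix from the stored gesture images
% (5 images per symbol, 40 images in total).
files = dir(fullfile('datasheet', '*.png'));
numImages = numel(files);              % expected to be 40 here
X = zeros(64*64, numImages);           % one column per training image
labels = cell(1, numImages);
for k = 1:numImages
    img = imread(fullfile('datasheet', files(k).name));
    img = imresize(img, [64 64]);      % guard against stray sizes
    X(:, k) = double(img(:));          % vectorise the binary image
    labels{k} = files(k).name;         % the file name encodes the symbol
end
```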

Fig. 3. Input database

2. Image pre-processing and segmentation: Pre-processing consists of image acquisition, segmentation, and morphological filtering. Segmentation of the hand is carried out to separate the object from the background; the PCA algorithm is used for this purpose. The segmented hand image is represented by certain features, which are then used for gesture recognition. Morphological filtering techniques remove noise from the images so that a smooth contour is obtained.

3. Feature extraction: This is a method of reducing data dimensionality by encoding related information in a compressed representation. The selected features are the centroid, skin color, and the principal components.

4. Sign recognition: The principal component analysis algorithm is used to identify the image from the database. The PCA algorithm involves two phases: a training phase and a recognition phase. In the training phase, each gesture is represented as a column vector. PCA then finds the eigenvectors of the covariance matrix of the gestures, and each gesture vector is multiplied by these eigenvectors to obtain its projection into gesture space. In the recognition phase, the query gesture is projected onto the gesture space and the Euclidean distance is computed between this projection and all known projections.
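The sketch below is one possible MATLAB rendering of these two PCA phases, reusing X and labels from the database sketch and bwFrame from the capture sketch. The choice of 10 retained eigenvectors is an assumption (it must not exceed the number of training images), and the code assumes R2017b or later for vecnorm and implicit expansion.

```matlab
% Training phase: centre the gestures and build the gesture space.
meanVec = mean(X, 2);
A = X - meanVec;                       % centred training gestures (one column each)
[Vs, D] = eig(A' * A);                 % small 40x40 eigenproblem (eigenfaces-style trick)
[~, order] = sort(diag(D), 'descend');
V = A * Vs(:, order(1:10));            % lift the 10 leading eigenvectors to image space
V = V ./ vecnorm(V);                   % normalise each eigen-gesture
trainProj = V' * A;                    % projection of every training gesture

% Recognition phase: project the query and pick the nearest stored projection.
testVec  = double(bwFrame(:));         % vectorised query gesture (64*64-by-1)
testProj = V' * (testVec - meanVec);
dists = vecnorm(trainProj - testProj); % Euclidean distance to each training projection
[~, bestMatch] = min(dists);
fprintf('Closest gesture in the database: %s\n', labels{bestMatch});
```

In practice the index of the best match would then be mapped to one of the five stored messages and sent to the Arduino, as in the serial hand-off sketch above.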

Fig. 4. Output matching with respect to the input database

5. Sign-to-voice conversion: The identified image is converted into a voice or speech signal using the APR9600 and a speaker.

V. RESULTS AND CONCLUSION
This paper proposes an electronic design that can be used for communication between deaf or mute people and normal people. The findings from this work can be summarised as follows:
- Using the PCA algorithm, the design is more compatible and responds faster than existing designs, with a response time of 2 to 3 seconds.
- The system is compact and portable.
- It enables efficient communication between differently abled (deaf, in this context) and normal people.
- Since sign language involves different gestures and hand movements, its use improves the fine motor skills of differently abled people.
- A mobile application can be built to make the design more efficient and user-friendly.

Fig. 5. Proposed hardware

VI. CONCLUSION
The proposed method was tested on different gestures and produces fairly stable and good results. Not every person can learn sign language to share thoughts with these physically impaired people, so we have come up with a system that enables mute people to communicate with everyone, using the image processing based language converter and the sign language recognition system proposed for human-computer interaction.

VII. REFERENCES
[1] Ibrahim Patel and Dr. Y. Srinivas Rao, "Automated speech synthesizer and converter in cue symbol generation for physically impairs," International Journal of Recent Trends in Engineering, Vol. 2, No. 7, Nov. 2009.
[2] Anbarasi Rajamohan, Hemavathy R., and Dhanalakshmi, "Deaf-Mute Communication Interpreter," Proceedings of the International MultiConference of Engineers and Computer Scientists 2009, Vol. I, IMECS 2009, March 18-20, 2009, Hong Kong.
[3] J. L. Hernandez-Rebollar, N. Kyriakopoulos, and R. W. Lindeman, "A new instrumented approach for translating American Sign Language into sound and text," Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition, 2004.
[4] S. A. Mehdi and Y. N. Khan, "Sign language recognition using sensor gloves," Proceedings of the 9th International Conference on Neural Information Processing (ICONIP '02), Vol. 5, 2002, IEEE Conference Publications.
[5] Kunal Kadam, Rucha Ganu, Ankita Bhosekar, and Prof. S. D. Joshi, "American Sign Language Interpreter," Proceedings of the 2012 IEEE Fourth International Conference on Technology for Education.
[6] Satjakarn Vutinuntakasame, "An Assistive Body Sensor Network Glove for Speech- and Hearing-Impaired Disabilities," Proceedings of the 2011 IEEE Computer Society International Conference on Body Sensor Networks.

[7] Anbarasi Rajamohan, Hemavathy R., and Dhanalakshmi M., "Deaf-Mute Communication Interpreter," International Journal of Scientific Engineering and Technology (ISSN: 2277-1581), Volume 2, Issue 5, pp. 336-341, 1 May 2013.
[8] Salman Afghani, Muhammad Akmal, and Raheel Yousaf, "Microcontroller and Sensors Based Gesture Vocalizer," Proceedings of the 7th WSEAS International Conference on Signal Processing, Robotics and Automation.
[9] Yongjing Liu, Yixing Yang, Lu Wang, and Jiapeng Xu, "Image Processing and Recognition of Multiple Static Hand Gestures for Human-Computer Interaction," 2013 Seventh International Conference on Image and Graphics, IEEE.
[10] Archana S. Ghotkar, Rucha Khatal, and Sanjana Khupase, "Hand Gesture Recognition for Indian Sign Language," 2012 International Conference on Computer Communication and Informatics (ICCCI-2012), IEEE.
[11] P. Subha Rajam and Dr. G. Balakrishnan, "Real Time Indian Sign Language Recognition System to aid Deaf-dumb People," IEEE, 2011.
[12] Wang Jingqiu and Zhang Ting, "An ARM-Based Embedded Gesture Recognition System Using a Data Glove," IEEE, 2014.
[13] Harshith C., Karthik R. Shastry, Manoj Ravindran, M. V. V. N. S. Srikanth, and Naveen Lakshmikhanth, "Survey on various gesture recognition techniques for interfacing machines based on ambient intelligence," IJCSES, Vol. 1, No. 2, November 2010.

CITE AN ARTICLE
Jain, A., Jain, L., Sharma, I., & Chauhan, A. (n.d.). Image processing based speaking system for mute people using hand gestures. International Journal of Engineering Sciences & Research Technology, 7(4), 368-374.