Translation of Unintelligible Speech of User into Synthetic Speech Using Augmentative and Alternative Communication


Translation of Unintelligible Speech of User into Synthetic Speech Using Augmentative and Alternative Communication

S. Jasmine Vanitha 1, K. Manimozhi 2
P.G. Scholar, Department of EEE, V.R.S. College of Engineering and Technology, Arasur, India 1
P.G. Scholar, Department of EEE, V.R.S. College of Engineering and Technology, Arasur, India 2

ABSTRACT: This paper presents an innovative approach to translating the unintelligible speech of a user into synthetic speech using an augmentative and alternative communication (AAC) device. The AAC device converts unintelligible utterances into understandable speech. The system is developed by building a small vocabulary from both the intelligible utterances and the unintelligible speech of people with autism. Experiments showed that this method achieves good recognition and translation performance on highly unintelligible speech, even when recognition perplexity is increased.

KEYWORDS: Augmentative and alternative communication, automatic voice recognition, autism, interpretation.

I. INTRODUCTION

Consider a world without sound: a world where we cannot tell someone what we want, or where we cannot get feedback on what we are doing. It would be nearly impossible to function in such a situation. People with unintelligible speech experience similar feelings, since the speech of a person with autism is often not understood by relatives or neighbours. Automatic speech recognition was introduced to address this problem. In prevailing recognition systems, users employ a mouse or keyboard as input to deliver messages to their relatives and neighbours. That method is inconvenient for people with autism, as they prefer to speak rather than use an aid. The objective of this paper is therefore to introduce such a recognition system. The augmentative and alternative communication device acts as a mediator between the person with autism and their relatives, translating the unintelligible speech of the user into synthetic speech.
AAC devices are divided into two types: the aided technique and the unaided technique. The aided technique has two sub-types, low tech and high tech. LOW TECH: pictures, shapes, and numbers are used to convey messages. HIGH TECH: electronic instruments are used to convey messages to relatives. In the unaided technique, the user makes hand gestures to convey messages; it is therefore used by people without the ability to speak.

II. RELATED WORKS

Many existing systems use speaker-independent speech recognition. A speaker-independent system can recognize speech from any user, and such models are trained on a large corpus of data recorded by many speakers. The recognition capability of an independent system is lower because many speakers use the same system, and the resulting output is neither accurate nor satisfactory. Speaker-independent automatic speech recognition is also unsuitable for speakers with severe speech disorders: the amount of material available for training is severely limited (as speaking often requires great effort), the material is highly variable, often has a limited phonetic repertoire, and is too different from the normal speech used in training speaker-independent models for many conventional adaptation techniques to be of assistance. Hence a speaker-dependent recognition methodology is introduced to make the output accurate and satisfactory.

Copyright to IJIRCCE 10.15680/ijircce.2015.0302016 715

The proposed system overcomes the disadvantages of existing methods by using a speaker-dependent speech recognition system. It is suitable for two types of users: users with partial speaking ability and users without speaking ability. If an emergency situation occurs for the partially speaking user, an alert message is generated and sent to a number that is already saved. The partially speaking user uses the high-tech aided technique of the AAC device, while the user without speaking ability uses the unaided technique.

III. MAIN CONCEPT OF THE PAPER

The recognition system in this paper supports two kinds of users: people with autism and people who cannot speak. A person with autism has partial speaking ability, so conveying a message is very difficult. To make the task easier, a speech recognition IC and a speaker module are used. The speech recognition IC recognizes the speech input from the user through the microphone, and the recognized unintelligible speech is translated into synthetic speech by the speaker module. Finally, the output for the partially speaking user is delivered through a speaker and an LCD display.

Fig. 1 Speech recognition system

In addition to the person with autism, the recognition system also supports people without speaking ability, who use gestures to convey messages to their relatives. The gesture is recognized by a sensor, and the message is translated into speech by the speaker module; the output is delivered through the speaker. Additionally, if an emergency situation occurs for the partially speaking user, an alert message is generated and sent to a mobile number that is already stored. The alert message is delivered to that number using AT commands.

IV. PROPOSED SYSTEM

Words spoken by a human being cause vibrations in air known as sound waves. These continuous (analog) sound waves are converted into an electrical signal by the microphone. The HM2007 is the recognition IC, and it performs two functions: training words and clearing trained words. For both functions, a keypad is used to associate each word with a number. For example: press 1 on the keypad (the display shows 01 and the LED turns off), then press the TRAIN key (the LED turns on) to place the circuit in training mode for word one. Say the target word clearly into the on-board microphone (near the LED). The circuit signals acceptance of the voice input by blinking the LED off and then on. The word (or utterance) is now identified as word 01. If the LED did not flash, start over by pressing 1 and then the TRAIN key.
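The keypad-driven train/recognise flow above can be sketched in software. The following is a minimal, hypothetical simulation of the bookkeeping only: the real HM2007 performs the acoustic matching in hardware, and the class, method names, and slot limit here are illustrative assumptions rather than part of the paper's design.

```python
# Software sketch of the HM2007-style train/recognise bookkeeping.
# The acoustic matching is replaced by exact string comparison; the
# real IC matches stored voice templates in hardware.

class WordTrainer:
    def __init__(self):
        self.words = {}          # slot number -> stored utterance template

    def train(self, slot, utterance):
        """Press <slot>, press TRAIN, then speak: store the utterance."""
        if not 1 <= slot <= 40:  # the HM2007 is commonly used with up to 40 short words
            raise ValueError("slot out of range")
        self.words[slot] = utterance
        return f"{slot:02d}"     # the display shows the two-digit slot number

    def recognise(self, utterance):
        """Return the slot number of a previously trained word, if any."""
        for slot, stored in self.words.items():
            if stored == utterance:   # stand-in for the IC's template matching
                return f"{slot:02d}"
        return "77"              # the HM2007's documented 'word not matched' code

trainer = WordTrainer()
trainer.train(20, "directory")
print(trainer.recognise("directory"))  # prints 20, as in the paper's example
```

The slot-to-word mapping mirrors the keypad procedure: the number entered before TRAIN becomes the identifier that the digital display later shows on recognition.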

Fig. 2 Block diagram of the speech and sign recognition system

Testing recognition: repeat a trained word into the microphone; the number of the word should appear on the digital display. For instance, if the word directory was trained as word number 20, saying the word directory into the microphone causes the number 20 to be displayed. The peripherals are controlled by the microcontroller, which sends the input of the partially speaking user from the HM2007 to the speaker module. This module operates as a recording and playback system: each word spoken by the partially speaking user has a corresponding phrase already stored in the speaker module, and that phrase is generated for the corresponding input. For example: in training mode, number 1 is pressed and the input drink tea is given; for that input, the phrase can I have a tea is trained in the APR module. In recognition mode, when the user says drink tea, the phrase can I have a tea is automatically played out through the loudspeaker.

An accelerometer is an electromechanical device that measures acceleration forces. These forces may be static, like the constant force of gravity, or dynamic, caused by moving or vibrating the accelerometer. The force caused by vibration or a change in motion (acceleration) squeezes the piezoelectric material, which produces an electrical charge proportional to the force exerted upon it. Since the charge is proportional to the force, and the mass is constant, the charge is also proportional to the acceleration. The piezoelectric crystal's charge output is connected directly to the measurement instruments; this charge output requires special accommodation and instrumentation, most commonly found in research facilities.
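The proportionality chain described above (charge proportional to force, force proportional to acceleration) can be written as q = S_q * a, where S_q is the charge sensitivity of the crystal. A brief numerical sketch follows; the 10 pC/g sensitivity is an illustrative assumption, not a value from the paper.

```python
# Charge-mode piezoelectric accelerometer: output charge is proportional
# to acceleration, q = S_q * a, where S_q is the charge sensitivity.
# The 10 pC/g figure below is assumed for illustration only.

def output_charge_pC(accel_g, sensitivity_pC_per_g=10.0):
    """Return the charge (picocoulombs) produced for an acceleration (in g)."""
    return sensitivity_pC_per_g * accel_g

# Doubling the acceleration doubles the charge, reflecting the linear
# relationship stated in the text.
print(output_charge_pC(1.0))  # 10.0
print(output_charge_pC(2.0))  # 20.0
```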
This type of accelerometer is also used in high-temperature applications (>120 °C) where low-impedance models cannot be used. Triaxial accelerometers measure vibration along three axes, X, Y and Z. They have three crystals positioned so that each reacts to vibration along a different axis, and the output comprises three signals, each representing the vibration along one of the three axes.

V. SIMULATION RESULTS

The software result is obtained for two operation modes: the emergency situation of the partially speaking user and the normal situation of the partially speaking user. Fig. 3 shows the entire simulation result. In that image, two switches are used as inputs to designate the operation modes: the first switch selects the emergency operation mode, and the switch below it selects the normal operation mode of the partially speaking user.

Fig. 3 View of the entire simulation diagram

The moment the first switch is pressed, the input from the user is given to the PIC microcontroller. The controller then checks whether the input matches the emergency-situation message. If the condition matches, it displays the message on the LCD and also sends an alert message to the stored number using AT commands. This result is shown in Fig. 4.

Fig. 4 Emergency situation output

Fig. 5 shows the output for the partially speaking user in the normal situation. This operation is selected by pressing the lower switch. As soon as the switch is pressed, the input is given to the microcontroller. The LCD displays the input, under microcontroller control, if the message matches the trained words, and the signal is asserted high on the oscilloscope, signifying acceptance of the input. When the switch is released, the signal goes low on the oscilloscope.
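The emergency alert described above would typically drive a GSM modem using the standard text-mode SMS AT commands (AT+CMGF and AT+CMGS, per 3GPP TS 27.005). The sketch below only composes the command byte sequence; the phone number and message text are placeholders, and a real system would write these bytes to the modem's serial port and wait for its "OK" and ">" responses between steps.

```python
# Compose the AT command sequence for sending a text-mode SMS alert.
# The number and message are placeholders; a real implementation would
# transmit each entry over the modem's UART and check its responses.

CTRL_Z = b"\x1a"  # terminates the SMS body in text mode

def sms_command_sequence(number, message):
    """Return the ordered list of byte strings to send to the GSM modem."""
    return [
        b"AT\r",                                   # modem sanity check
        b"AT+CMGF=1\r",                            # select SMS text mode
        b'AT+CMGS="' + number.encode() + b'"\r',   # start a message to the number
        message.encode() + CTRL_Z,                 # body, ended with Ctrl-Z
    ]

for cmd in sms_command_sequence("+10000000000", "EMERGENCY: user needs help"):
    print(cmd)
```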

Fig. 5 Normal situation of the partially speaking user

VI. CONCLUSION

The paper Translation of Unintelligible Speech of User into Synthetic Speech Using Augmentative and Alternative Communication thus addresses the problem of people who have great difficulty speaking with others, and it also addresses the difficulties of people who cannot speak. The system can be enhanced for users without speaking ability: a sixth-sense technology could be developed predominantly for them to satisfy their requirements rapidly. Further development should concentrate on the storage of trained words so that the number of trained words can be increased.

REFERENCES
1. Anetha K. and Rejina Parvin J., "Hand Talk - A Sign Language Recognition Based on Accelerometer and SEMG Data," vol. 2, special issue 3, July 2014.
2. B. O'Keefe, N. Kozak, and R. Schuller, "Research priorities in augmentative and alternative communication as identified by people who use AAC and their facilitators," Augmentative Alternative Commun., vol. 23, no. 1, pp. 89-96, 2007.
3. Black and K. Lenzo, "Flite: A small fast run-time synthesis engine," in 4th ISCA Speech Synthesis Workshop, 2001, pp. 157-162.
4. D. Beukelman and P. Mirenda, Augmentative and Alternative Communication, 3rd ed. Baltimore, MD: Paul H. Brookes, 2005.
5. J. Murphy, "I prefer contact this close: Perceptions of AAC by people with motor neurone disease and their communication partners," Augmentative Alternative Commun., vol. 20, pp. 259-271, 2004.
6. J. Todman, "Rate and quality of conversations using a text-storage AAC system: Single-case training study," Augmentative Alternative Commun., vol. 16, no. 3, pp. 164-179, 2000.
7. J. Todman, N. Alm, J. Higginbotham, and P. File, "Whole utterance approaches in AAC," Augmentative Alternative Commun., vol. 24, no. 3, pp. 235-254, 2008.
8. L. R. Rabiner and B. H. Juang, "An introduction to hidden Markov models," IEEE ASSP Mag., Jun. 1986.
9. M. Parker, S. P. Cunningham, P. Enderby, M. S. Hawley, and P. D. Green, "Automatic speech recognition and training for severely dysarthric users of assistive technology: The STARDUST project," Clin. Linguistics Phonetics, vol. 20, no. 2-3, pp. 149-156, 2006.
10. M. S. Hawley, S. P. Cunningham, F. Cardinaux, A. Coy, S. Seghal, and P. Enderby, "Challenges in developing a voice input voice output communication aid for people with severe dysarthria," in Proc. AAATE Challenges Assistive Technol., 2007, pp. 363-367.
11. P. D. Green, J. Carmichael, A. Hatzis, P. Enderby, M. S. Hawley, and M. Parker, "Automatic speech recognition with sparse training data for dysarthric speakers," in Eur. Conf. Speech Commun. Technol. (Eurospeech), 2003, pp. 1189-1192.
12. P. Enderby and L. Emerson, Does Speech and Language Therapy Work? London, U.K.: Singular, 1995.
13. S. Young et al., The HTK Book, Cambridge University Engineering Department, 2006.

Copyright to IJIRCCE 10.15680/ijircce.2015.0302016 719