Research Proposal on Emotion Recognition

Colin Grubb

June 3, 2012

Abstract

In this paper I will introduce my thesis question: to what extent can emotion recognition be improved by combining audio and visual information? In addition to covering background on audio information, I will introduce new material on image processing and some of the work that has been done in the field. I will then discuss methodologies for combining the two sources of information and for evaluating them.

Introduction

Robots and computers have already become a prominent aspect of our lives, and their presence will only continue to grow, giving way to unique technologies. However, there are numerous obstacles to overcome before robots can interact fluidly with humans on a day-to-day basis. Imagine a robot that can act as a psychiatrist: it can interpret a patient's emotions and formulate an appropriate response. Reading emotions is a complicated process, but one that humans are very good at. Humans can fuse visual information (a scowl on a person's face) and audio information (loud and intense speech) in order to gauge an emotion such as anger. If robots and computers are to interact with humans effectively in scenarios such as the one suggested above, they need to be able to process both audio and visual information in order to produce a single output.

Audio Information

One of the major tasks in spoken dialogue systems is speech recognition, the act of converting spoken words into text that a system can then interpret. The speech begins as an acoustic signal, which is converted into digital form and ultimately turned into phonemes that the system uses to build words [4]. Like other aspects of natural language processing, speech recognition involves many difficulties. It also has many useful applications, one of which is emotion recognition.
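The front end described above (acoustic signal, digitized samples, then units for analysis) can be sketched in a few lines. This is only a toy illustration under assumed parameters (16 kHz sampling, 25 ms windows with a 10 ms hop, and a synthetic tone standing in for speech), not the pipeline of any particular recognizer:

```python
import numpy as np

# Toy illustration of the front end of a speech recognizer:
# a sampled signal is split into short overlapping frames, from
# which features (and ultimately phoneme evidence) are computed.
# The rate and framing values are common defaults, assumed here.

SAMPLE_RATE = 16000                     # samples per second
FRAME_LEN = int(0.025 * SAMPLE_RATE)    # 25 ms analysis window
FRAME_STEP = int(0.010 * SAMPLE_RATE)   # 10 ms hop between windows

def frame_signal(signal, frame_len=FRAME_LEN, step=FRAME_STEP):
    """Split a 1-D sample array into overlapping analysis frames."""
    n_frames = 1 + max(0, (len(signal) - frame_len) // step)
    return np.stack([signal[i * step : i * step + frame_len]
                     for i in range(n_frames)])

# One second of a synthetic 200 Hz tone standing in for speech.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
signal = np.sin(2 * np.pi * 200 * t)

frames = frame_signal(signal)
print(frames.shape)  # (number of frames, samples per frame)
```

Each row of `frames` is then the unit on which per-frame features such as pitch or energy would be computed.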
The ability to recognize a speaker's emotional state has many potential applications, and numerous projects have been undertaken in the area, both in real-world systems and in research. The most obvious application is classifying a user's emotional state; a more specific variant of this research divides that state between two categories. One study of this nature was conducted as early as 1999 by researcher Valery Petrushin, in which recognizers were constructed that could classify a speaker as agitated or calm. Emotion recognition is particularly important in call centers, where monitoring the user's frustration level is important for quality service; this system was used in an automated call center that could prioritize calls [6]. In this system, neural networks were trained on a small corpus of telephone messages, a portion of which contained angry sentences. A later study, conducted by researcher Chul Min Lee, involved collecting speech data from a call center and creating recognizers that accounted for language and discourse information in addition to acoustic information [3]. Another field of application lies in online emotion recognition: a system called EmoVoice has been used in numerous applications, such as Greta, a virtual agent that recognizes a user's emotion and mirrors it [9].

Many commonalities exist between research projects and applications in the area of emotion recognition, including the features of the voice used to classify emotion, the way those features are extracted, and how the recognizers themselves are constructed. When analyzing emotional state, there are numerous features of speech that can be examined. Prosodic information is important for both humans and computers in identifying a particular emotional state: prosody refers to information such as pitch, loudness, and rhythm, and can carry information about attitude [4]. One of the most common features used to classify emotion is the pitch of a speaker's voice; a study conducted by researcher Björn Schuller in 2003 used features of pitch to classify a speaker's emotion, since pitch contains a large amount of information about emotional state [8]. While prosodic information has long been central to emotion recognition, the study conducted by Chul Min Lee in 2005 developed a method for identifying certain words as being important to particular emotions, and found that the addition of lexical and discourse information improved the system's ability to correctly identify an emotional state [3]. In creating a recognizer, the system must be trained to recognize particular emotions using the features selected for the study. Typically, a corpus of sentences pronounced with emotion is gathered. Structurally, Hidden Markov Models have been widely used in the construction of speech recognition systems [4]; neural networks have also been trained via backpropagation to recognize particular emotions. When creating a system that classifies a speaker's emotional state, the simplest way to judge its performance is to keep track of how often it identifies the correct emotion.
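As a concrete illustration of pitch as a feature, a minimal F0 estimator can be sketched using autocorrelation. The sampling rate, search range, and synthetic test tone below are assumptions chosen for illustration; the cited studies use far more robust extraction methods:

```python
import numpy as np

# Minimal sketch of extracting one prosodic feature -- pitch (F0) --
# from a single frame of speech via autocorrelation: the lag at which
# the signal best matches a shifted copy of itself gives the period.

def estimate_pitch(frame, sample_rate, fmin=60.0, fmax=400.0):
    """Return an F0 estimate in Hz for one analysis frame."""
    frame = frame - frame.mean()
    # Full autocorrelation; keep the non-negative lags only.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(sample_rate / fmax)      # smallest lag to consider
    hi = int(sample_rate / fmin)      # largest lag to consider
    lag = lo + np.argmax(ac[lo:hi])   # lag of strongest periodicity
    return sample_rate / lag

sr = 16000
t = np.arange(int(0.04 * sr)) / sr        # one 40 ms frame
frame = np.sin(2 * np.pi * 220 * t)       # synthetic 220 Hz "voice"
print(estimate_pitch(frame, sr))          # close to 220 Hz
```

A recognizer would compute statistics of such per-frame pitch values (mean, range, contour) as inputs to the classifier, rather than using a single estimate.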
It is important to keep track not only of whether the recognizer identified the correct emotion, but also of which emotions are misidentified more often than others. A prominent commonality among the results of previous studies is that anger is the easiest emotion to recognize, whereas fear is the hardest for recognizers (and humans) to correctly identify [6].

Visual Information

Visual processing has two main commonalities with audio recognition: systems in both fields must extract important features from the input source in order to reach a conclusion about the emotion in the input, and systems in both areas must undergo training in order to give the appropriate outputs for a given input. As with audio recognition, a large amount of research has been conducted on image evaluation and on improving the processing, particularly with faces, and numerous databases are freely available over the internet. A study conducted at Union College by Shane Cotter used a database called the Japanese Female Facial Expression Database as its input [1] [2]. This study focused on analyzing regions of the face individually, rather than the face as a whole, and then combining the information from selected regions in order to classify emotions; it found that this method was an improvement over analyzing the whole face. Some basic hands-on research has also been conducted on image processing. For my project in CSC333 - Introduction to Parallel Computing, I am writing a program that takes in a series of image files and analyzes each picture, calculating the center of mass in the X and Y directions. I obtained a freely downloadable database of faces from the University of Sheffield's Image Processing Laboratory [5]. The files are in PGM (Portable Gray Map) format, which is designed to be easy to edit; the pixel information is contained within a 2-D array within the file [7].
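Because the pixel data sits in a simple 2-D array, these computations are straightforward. The sketch below is a hypothetical version of the exercise: it uses the ASCII (P2) PGM variant for readability (the database files are typically the binary P5 variant, which differs only in how the pixel array is encoded) and a toy 3x3 image, counting black and white pixels and computing a brightness-weighted center of mass:

```python
import numpy as np

# Sketch of the CSC333 exercise on a tiny example: parse a plain
# (P2) PGM into a 2-D array, then count black/white pixels and
# compute a grayscale-weighted center of mass in X and Y.

def parse_pgm_ascii(text):
    """Return (pixels, maxval) for a plain (P2) PGM, comments stripped."""
    tokens = [t for line in text.splitlines()
              for t in line.split('#')[0].split()]
    assert tokens[0] == "P2", "not a plain PGM"
    width, height, maxval = map(int, tokens[1:4])
    pixels = np.array(tokens[4:], dtype=float).reshape(height, width)
    return pixels, maxval

def center_of_mass(pixels):
    """Brightness-weighted mean (x, y) position."""
    total = pixels.sum()
    ys, xs = np.indices(pixels.shape)
    return (xs * pixels).sum() / total, (ys * pixels).sum() / total

# A 3x3 toy image: one bright pixel in the top-right corner.
sample = """P2
3 3
255
0 0 255
0 0 0
0 0 0
"""
pixels, maxval = parse_pgm_ascii(sample)
print(center_of_mass(pixels))                          # all mass at x=2, y=0
print((pixels == 0).sum(), (pixels == maxval).sum())   # black, white counts
```

The same per-image loop is what the parallel version would distribute across processes, one image (or block of rows) per worker.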
For the parallel computing class, I intend to analyze this corpus of faces, counting the number of black, white, and greyscale pixels, and also to analyze the concentration of black pixels in the images. While this project is not on par with some of the research being done in image processing, it is an interesting introduction to the field.

The Process

The analysis process will involve several steps. A video feed will be taken of a user speaking with an emotional undertone. The video stream will be split into two separate inputs: a sound clip of the user's speech, and one or more frames chosen from the video stream. How the frames are chosen is yet to be determined; a selection method could be developed, or they could be chosen at random. After the two inputs have been extracted, two separate recognizer systems, one for audio recognition and one for visual recognition, will be applied to them to extract important features and produce an output. The EmoVoice framework will be used for audio recognition; the visual processing software/algorithm has yet to be selected. Another possibility to consider is combining the two systems in some way, so that instead of producing two separate outputs and comparing them, they would produce a single output. This possibility is only speculation at this point.

Figure 1.1: The process used to analyze emotional state (face images from [2]).

Testing and Evaluation

To train the systems, a large amount of video data will have to be gathered and fed to them; a similar process will be followed to test them. Several issues must be considered when evaluating the system, one of which is the form of output a system can produce. One method of presenting output, such as that used in Shane Cotter's occluded-face study [1], reports the success rates of several methods of facial analysis. Another, such as that used by the virtual agent Greta, which implements EmoVoice [9], outputs the emotion the system identifies. It will be important to keep track of failure rates to see which emotions the systems have trouble identifying. Another issue to consider is conflicting output: if the systems assign different emotions to their inputs, several questions arise. Which system was right? Are both wrong? If one system is wrong, which emotion did it identify? Is one system, or both, misidentifying particular emotions more than others? Audio recognition might be better than visual processing at recognizing certain emotions, and visual processing could perform better in other cases. As previously stated, certain emotions have proven easier (and harder) for both humans and systems to recognize, so it will be interesting to see whether this study follows those trends. A further consideration when analyzing the data and comparing the performance of the two systems is the personalization of emotion expression: a particular user might express anger strongly in their voice but not in their facial expression, or vice versa.

Conclusion

The combination of audio and visual recognition is a fascinating task.
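As a hypothetical illustration of the conflicting-output questions raised under Testing and Evaluation, the simplest combination would be decision-level fusion: each recognizer emits a probability per emotion, and a weighted average picks the final label. The emotion set and weights below are illustrative assumptions only; the proposal deliberately leaves the combination method open:

```python
# Hypothetical sketch of decision-level fusion: each system emits a
# probability per emotion, and a weighted average resolves conflicts.
# The emotion set and the 50/50 weighting are illustrative assumptions.

EMOTIONS = ["anger", "fear", "joy", "sadness"]

def fuse(audio_probs, visual_probs, audio_weight=0.5):
    """Weighted average of two per-emotion probability dicts."""
    w = audio_weight
    return {e: w * audio_probs[e] + (1 - w) * visual_probs[e]
            for e in EMOTIONS}

# The two systems disagree: audio favors anger, vision favors fear.
audio = {"anger": 0.6, "fear": 0.2, "joy": 0.1, "sadness": 0.1}
visual = {"anger": 0.3, "fear": 0.5, "joy": 0.1, "sadness": 0.1}

fused = fuse(audio, visual)
print(max(fused, key=fused.get))  # anger wins under equal weighting
```

Tuning `audio_weight` per emotion (rather than globally) would be one way to exploit the observation that each modality may be better at certain emotions.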
A great deal of research has been conducted in both areas, giving a good foundation upon which to build. Overall, there is still a good amount of research to be done and design choices to flesh out, particularly in the visual processing realm and in the selection and usage of existing recognizers. While the basic process of analysis has been laid out, there is still great potential for change and modification, though the research question is likely to remain the same. The project should present some interesting challenges and should also produce some interesting data. At this point, the research has gone quite well, and hopefully it will continue to proceed smoothly as the main portion of the thesis begins.

References

[1] Shane Cotter. Recognition of occluded facial expressions using a fusion of localized sparse representation classifiers. In 2011 IEEE Digital Signal Processing Workshop and IEEE Signal Processing Education Workshop (DSP/SPE). This paper describes a recent study on analyzing regions of faces in order to combine information from each region and classify the facial expression of the image. I still have only a basic understanding of visual processing, so I will likely need to read additional sources as well as examine this source in more detail.

[2] Miyuki Kamachi. The Japanese Female Facial Expression (JAFFE) database. This is the database of images of various facial expressions used by Shane Cotter in his research on occluded facial expressions. This database is freely downloadable.

[3] Chul Min Lee. Toward detecting emotions in spoken dialogs. IEEE Transactions on Speech and Audio Processing, volume 13. This paper stood out from the others because the study attempted to analyze more than just acoustic information (lexical and discourse information as well) in order to classify emotions, for example after finding that certain words were often associated with a particular emotion. The study showed improved performance when combining these other information categories. It is certainly interesting, but I am not sure whether I will have the time to look at more than acoustic signals.

[4] Michael F. McTear. Spoken Dialogue Technology: Toward the Conversational User Interface. Springer. This book's section on speech recognition offers a good overview of the procedures and difficulties of recognizing speech, as well as touching on Hidden Markov Models and how they can be used to structure a speech recognizer.

[5] The University of Sheffield: Image Engineering Laboratory. Face database. I acquired the face database from this laboratory; it is free to use so long as I do not publish commercially, and I am to let the laboratory know if I make a publication. I plan on sending the head of the department an email explaining how I plan on using the database.

[6] Valery A. Petrushin. Emotion in speech: Recognition and application to call centers. In Engr, pages 7-10. This article discussed experiments in which people's ability to judge certain types of emotions was gauged, as well as the specific aspects of the spoken word that they deemed most important to recognizing certain emotions. It was found that certain emotions were easier to recognize than others. The aspects of speech found to be important were used to train neural networks. The article also discussed application to a call center in which a caller's emotional state could be classified.

[7] Jef Posnaker. pgm. This is where I learned about the structure of PGM files and how I could acquire data on the greyscale value of individual pixels, leading to more calculation possibilities and a hands-on introduction to basic image analysis.

[8] Bjoern Schuller, Gerhard Rigoll, and Manfred Lang. Hidden Markov model-based speech emotion recognition. In 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 2, pages II:1-II:4. The article was an interesting read on another method of training recognizers, via Hidden Markov Models. Like other experiments, the training data and recognizers worked with a set of predefined emotions and used certain aspects of speech to train the system. I am a little confused by some of the statistics jargon; I am no stranger to statistics, but I could use a refresher.

[9] Thurid Vogt, Elisabeth Andre, and Nikolaus Bee. EmoVoice - a framework for online recognition of emotions from voice. In Perception in Multimodal Dialogue Systems - 4th IEEE Tutorial and Research Workshop on Perception and Interactive Technologies for Speech-Based Systems, volume 5078. This paper introduces an online emotion recognition system called EmoVoice. The article describes how the system works and shows several examples of EmoVoice implemented in other applications. There is a strong possibility that my thesis will be some sort of application or system (a robot, perhaps) that uses EmoVoice for emotion recognition.


More information

1. The first step in creating a speech involves determining the purpose of the speech. A) True B) False

1. The first step in creating a speech involves determining the purpose of the speech. A) True B) False 1. The first step in creating a speech involves determining the purpose of the speech. 2. Audience analysis is a systematic process of getting to know your listeners relative to the topic and the speech

More information

INTELLIGENT LIP READING SYSTEM FOR HEARING AND VOCAL IMPAIRMENT

INTELLIGENT LIP READING SYSTEM FOR HEARING AND VOCAL IMPAIRMENT INTELLIGENT LIP READING SYSTEM FOR HEARING AND VOCAL IMPAIRMENT R.Nishitha 1, Dr K.Srinivasan 2, Dr V.Rukkumani 3 1 Student, 2 Professor and Head, 3 Associate Professor, Electronics and Instrumentation

More information

Prediction of Psychological Disorder using ANN

Prediction of Psychological Disorder using ANN Prediction of Psychological Disorder using ANN Purva D. Kekatpure #1, Prof. Rekha S. Sugandhi *2 # Department of Computer Engineering, M.I.T. College of Engineering Kothrud, Pune, India, University of

More information

Running head: HEARING-AIDS INDUCE PLASTICITY IN THE AUDITORY SYSTEM 1

Running head: HEARING-AIDS INDUCE PLASTICITY IN THE AUDITORY SYSTEM 1 Running head: HEARING-AIDS INDUCE PLASTICITY IN THE AUDITORY SYSTEM 1 Hearing-aids Induce Plasticity in the Auditory System: Perspectives From Three Research Designs and Personal Speculations About the

More information

A Guide to Theatre Access: Marketing for captioning

A Guide to Theatre Access: Marketing for captioning Guide A Guide to Theatre Access: Marketing for captioning Image courtesy of Stagetext. Heather Judge. CaptionCue test event at the National Theatre, 2015. Adapted from www.accessibletheatre.org.uk with

More information

Requirements for Maintaining Web Access for Hearing-Impaired Individuals

Requirements for Maintaining Web Access for Hearing-Impaired Individuals Requirements for Maintaining Web Access for Hearing-Impaired Individuals Daniel M. Berry 2003 Daniel M. Berry WSE 2001 Access for HI Requirements for Maintaining Web Access for Hearing-Impaired Individuals

More information

Inventions on expressing emotions In Graphical User Interface

Inventions on expressing emotions In Graphical User Interface From the SelectedWorks of Umakant Mishra September, 2005 Inventions on expressing emotions In Graphical User Interface Umakant Mishra Available at: https://works.bepress.com/umakant_mishra/26/ Inventions

More information

Situation Reaction Detection Using Eye Gaze And Pulse Analysis

Situation Reaction Detection Using Eye Gaze And Pulse Analysis Situation Reaction Detection Using Eye Gaze And Pulse Analysis 1 M. Indumathy, 2 Dipankar Dey, 2 S Sambath Kumar, 2 A P Pranav 1 Assistant Professor, 2 UG Scholars Dept. Of Computer science and Engineering

More information

Date: April 19, 2017 Name of Product: Cisco Spark Board Contact for more information:

Date: April 19, 2017 Name of Product: Cisco Spark Board Contact for more information: Date: April 19, 2017 Name of Product: Cisco Spark Board Contact for more information: accessibility@cisco.com Summary Table - Voluntary Product Accessibility Template Criteria Supporting Features Remarks

More information

DOWNLOAD OR READ : THE VOICE IN SPEAKING PDF EBOOK EPUB MOBI

DOWNLOAD OR READ : THE VOICE IN SPEAKING PDF EBOOK EPUB MOBI DOWNLOAD OR READ : THE VOICE IN SPEAKING PDF EBOOK EPUB MOBI Page 1 Page 2 the voice in speaking the voice in speaking pdf the voice in speaking Voice and Speaking Skills For Dummies 1st Edition Pdf Download

More information

Speech Processing / Speech Translation Case study: Transtac Details

Speech Processing / Speech Translation Case study: Transtac Details Speech Processing 11-492/18-492 Speech Translation Case study: Transtac Details Phraselator: One Way Translation Commercial System VoxTec Rapid deployment Modules of 500ish utts Transtac: Two S2S System

More information

BFI-Based Speaker Personality Perception Using Acoustic-Prosodic Features

BFI-Based Speaker Personality Perception Using Acoustic-Prosodic Features BFI-Based Speaker Personality Perception Using Acoustic-Prosodic Features Chia-Jui Liu, Chung-Hsien Wu, Yu-Hsien Chiu* Department of Computer Science and Information Engineering, National Cheng Kung University,

More information

Making Sure People with Communication Disabilities Get the Message

Making Sure People with Communication Disabilities Get the Message Emergency Planning and Response for People with Disabilities Making Sure People with Communication Disabilities Get the Message A Checklist for Emergency Public Information Officers This document is part

More information

AUDIO-VISUAL EMOTION RECOGNITION USING AN EMOTION SPACE CONCEPT

AUDIO-VISUAL EMOTION RECOGNITION USING AN EMOTION SPACE CONCEPT 16th European Signal Processing Conference (EUSIPCO 28), Lausanne, Switzerland, August 25-29, 28, copyright by EURASIP AUDIO-VISUAL EMOTION RECOGNITION USING AN EMOTION SPACE CONCEPT Ittipan Kanluan, Michael

More information

FACIAL EXPRESSION RECOGNITION FROM IMAGE SEQUENCES USING SELF-ORGANIZING MAPS

FACIAL EXPRESSION RECOGNITION FROM IMAGE SEQUENCES USING SELF-ORGANIZING MAPS International Archives of Photogrammetry and Remote Sensing. Vol. XXXII, Part 5. Hakodate 1998 FACIAL EXPRESSION RECOGNITION FROM IMAGE SEQUENCES USING SELF-ORGANIZING MAPS Ayako KATOH*, Yasuhiro FUKUI**

More information

A Communication tool, Mobile Application Arabic & American Sign Languages (ARSL) Sign Language (ASL) as part of Teaching and Learning

A Communication tool, Mobile Application Arabic & American Sign Languages (ARSL) Sign Language (ASL) as part of Teaching and Learning A Communication tool, Mobile Application Arabic & American Sign Languages (ARSL) Sign Language (ASL) as part of Teaching and Learning Fatima Al Dhaen Ahlia University Information Technology Dep. P.O. Box

More information

EMOTION DETECTION FROM TEXT DOCUMENTS

EMOTION DETECTION FROM TEXT DOCUMENTS EMOTION DETECTION FROM TEXT DOCUMENTS Shiv Naresh Shivhare and Sri Khetwat Saritha Department of CSE and IT, Maulana Azad National Institute of Technology, Bhopal, Madhya Pradesh, India ABSTRACT Emotion

More information

ADVANCES in NATURAL and APPLIED SCIENCES

ADVANCES in NATURAL and APPLIED SCIENCES ADVANCES in NATURAL and APPLIED SCIENCES ISSN: 1995-0772 Published BYAENSI Publication EISSN: 1998-1090 http://www.aensiweb.com/anas 2017 May 11(7): pages 166-171 Open Access Journal Assistive Android

More information

Accessible Computing Research for Users who are Deaf and Hard of Hearing (DHH)

Accessible Computing Research for Users who are Deaf and Hard of Hearing (DHH) Accessible Computing Research for Users who are Deaf and Hard of Hearing (DHH) Matt Huenerfauth Raja Kushalnagar Rochester Institute of Technology DHH Auditory Issues Links Accents/Intonation Listening

More information

5 Quick Tips for Improving Your Emotional Intelligence. and Increasing Your Success in All Areas of Your Life

5 Quick Tips for Improving Your Emotional Intelligence. and Increasing Your Success in All Areas of Your Life 5 Quick Tips for Improving Your Emotional Intelligence and Increasing Your Success in All Areas of Your Life Table of Contents Self-Awareness... 3 Active Listening... 4 Self-Regulation... 5 Empathy...

More information

Psy /16 Human Communication. By Joseline

Psy /16 Human Communication. By Joseline Psy-302 11/16 Human Communication By Joseline Lateralization Left Hemisphere dominance in speech production in 95% of right handed and 70% of left handed people Left -> Timing, Sequence of events Right

More information

Communicating with Patients/Clients Who Know More Than They Can Say

Communicating with Patients/Clients Who Know More Than They Can Say Communicating with Patients/Clients Who Know More Than They Can Say An Introduction to Supported Conversation for Adults with Aphasia (SCA ) Developed by: The Aphasia Institute Provided through: the Community

More information

Dimensional Emotion Prediction from Spontaneous Head Gestures for Interaction with Sensitive Artificial Listeners

Dimensional Emotion Prediction from Spontaneous Head Gestures for Interaction with Sensitive Artificial Listeners Dimensional Emotion Prediction from Spontaneous Head Gestures for Interaction with Sensitive Artificial Listeners Hatice Gunes and Maja Pantic Department of Computing, Imperial College London 180 Queen

More information

ITU-T. FG AVA TR Version 1.0 (10/2013) Part 3: Using audiovisual media A taxonomy of participation

ITU-T. FG AVA TR Version 1.0 (10/2013) Part 3: Using audiovisual media A taxonomy of participation International Telecommunication Union ITU-T TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU FG AVA TR Version 1.0 (10/2013) Focus Group on Audiovisual Media Accessibility Technical Report Part 3: Using

More information

Information Session. What is Dementia? People with dementia need to be understood and supported in their communities.

Information Session. What is Dementia? People with dementia need to be understood and supported in their communities. Information Session People with dementia need to be understood and supported in their communities. You can help by becoming a Dementia Friend. Visit www.actonalz.org/dementia-friends to learn more! Dementia

More information

Hand-Gesture Recognition System For Dumb And Paraplegics

Hand-Gesture Recognition System For Dumb And Paraplegics Hand-Gesture Recognition System For Dumb And Paraplegics B.Yuva Srinivas Raja #1, G.Vimala Kumari *2, K.Susmitha #3, CH.V.N.S Akhil #4, A. Sanhita #5 # Student of Electronics and Communication Department,

More information

Voluntary Product Accessibility Template (VPAT)

Voluntary Product Accessibility Template (VPAT) Avaya Vantage TM Basic for Avaya Vantage TM Voluntary Product Accessibility Template (VPAT) Avaya Vantage TM Basic is a simple communications application for the Avaya Vantage TM device, offering basic

More information

Interpreting, translation and communication policy

Interpreting, translation and communication policy Interpreting, translation and communication policy Contents Content Page Number 1.0 Policy Statement 2 2.0 Legal Considerations 2 3.0 Policy Aim 2 4.0 Policy Commitments 2-3 5.0 Overview 3-4 6.0 Translating

More information

Member 1 Member 2 Member 3 Member 4 Full Name Krithee Sirisith Pichai Sodsai Thanasunn

Member 1 Member 2 Member 3 Member 4 Full Name Krithee Sirisith Pichai Sodsai Thanasunn Microsoft Imagine Cup 2010 Thailand Software Design Round 1 Project Proposal Template PROJECT PROPOSAL DUE: 31 Jan 2010 To Submit to proposal: Register at www.imaginecup.com; select to compete in Software

More information

EDITORIAL POLICY GUIDANCE HEARING IMPAIRED AUDIENCES

EDITORIAL POLICY GUIDANCE HEARING IMPAIRED AUDIENCES EDITORIAL POLICY GUIDANCE HEARING IMPAIRED AUDIENCES (Last updated: March 2011) EDITORIAL POLICY ISSUES This guidance note should be considered in conjunction with the following Editorial Guidelines: Accountability

More information

Divide-and-Conquer based Ensemble to Spot Emotions in Speech using MFCC and Random Forest

Divide-and-Conquer based Ensemble to Spot Emotions in Speech using MFCC and Random Forest Published as conference paper in The 2nd International Integrated Conference & Concert on Convergence (2016) Divide-and-Conquer based Ensemble to Spot Emotions in Speech using MFCC and Random Forest Abdul

More information

Skill Council for Persons with Disability Expository for Speech and Hearing Impairment E004

Skill Council for Persons with Disability Expository for Speech and Hearing Impairment E004 Skill Council for Persons with Disability Expository for Speech and Hearing Impairment E004 Definition According to The Rights of Persons with Disabilities Act, 2016 Hearing Impairment defined as: (a)

More information

Dutch Multimodal Corpus for Speech Recognition

Dutch Multimodal Corpus for Speech Recognition Dutch Multimodal Corpus for Speech Recognition A.G. ChiŃu and L.J.M. Rothkrantz E-mails: {A.G.Chitu,L.J.M.Rothkrantz}@tudelft.nl Website: http://mmi.tudelft.nl Outline: Background and goal of the paper.

More information

Note: This document describes normal operational functionality. It does not include maintenance and troubleshooting procedures.

Note: This document describes normal operational functionality. It does not include maintenance and troubleshooting procedures. Date: 28 SEPT 2016 Voluntary Accessibility Template (VPAT) This Voluntary Product Accessibility Template (VPAT) describes accessibility of Polycom s SoundStation Duo against the criteria described in Section

More information

Note: This document describes normal operational functionality. It does not include maintenance and troubleshooting procedures.

Note: This document describes normal operational functionality. It does not include maintenance and troubleshooting procedures. Date: 18 Nov 2013 Voluntary Accessibility Template (VPAT) This Voluntary Product Accessibility Template (VPAT) describes accessibility of Polycom s C100 and CX100 family against the criteria described

More information

Phonak Wireless Communication Portfolio Product information

Phonak Wireless Communication Portfolio Product information Phonak Wireless Communication Portfolio Product information The Phonak Wireless Communications Portfolio offer great benefits in difficult listening situations and unparalleled speech understanding in

More information

draft Big Five 03/13/ HFM

draft Big Five 03/13/ HFM participant client HFM 03/13/201 This report was generated by the HFMtalentindex Online Assessment system. The data in this report are based on the answers given by the participant on one or more psychological

More information

Audio Visual Speech Synthesis and Speech Recognition for Hindi Language

Audio Visual Speech Synthesis and Speech Recognition for Hindi Language Audio Visual Speech Synthesis and Speech Recognition for Hindi Language Kaveri Kamble, Ramesh Kagalkar Department of Computer Engineering Dr. D. Y. Patil School of Engineering & technology Pune, India.

More information

Dealing with Difficult People 1

Dealing with Difficult People 1 Dealing with Difficult People 1 Dealing With People Copyright 2006 by Alan Fairweather All rights reserved. No part of this book may be reproduced in any form and by any means (including electronically,

More information

How can the Church accommodate its deaf or hearing impaired members?

How can the Church accommodate its deaf or hearing impaired members? Is YOUR church doing enough to accommodate persons who are deaf or hearing impaired? Did you know that according to the World Health Organization approximately 15% of the world s adult population is experiencing

More information

Tips for Youth Group Leaders

Tips for Youth Group Leaders OVERWHELMED Sometimes youth on the Autism Spectrum become so over-whelmed they are unable to function Most situations can be avoided by asking the youth to gauge their own comfort level Because the body

More information

Chapter 3 Self-Esteem and Mental Health

Chapter 3 Self-Esteem and Mental Health Self-Esteem and Mental Health How frequently do you engage in the following behaviors? SCORING: 1 = never 2 = occasionally 3 = most of the time 4 = all of the time 1. I praise myself when I do a good job.

More information

COMBINING CATEGORICAL AND PRIMITIVES-BASED EMOTION RECOGNITION. University of Southern California (USC), Los Angeles, CA, USA

COMBINING CATEGORICAL AND PRIMITIVES-BASED EMOTION RECOGNITION. University of Southern California (USC), Los Angeles, CA, USA COMBINING CATEGORICAL AND PRIMITIVES-BASED EMOTION RECOGNITION M. Grimm 1, E. Mower 2, K. Kroschel 1, and S. Narayanan 2 1 Institut für Nachrichtentechnik (INT), Universität Karlsruhe (TH), Karlsruhe,

More information

AUTISM. Social Communication Skills

AUTISM. Social Communication Skills AUTISM Social Communication Skills WHAT IS AUTISM? Autism is a developmental disorder that appears in the first 3 years of life, and affects the brain's normal development of social and communication skills

More information

Hand Gesture Recognition and Speech Conversion for Deaf and Dumb using Feature Extraction

Hand Gesture Recognition and Speech Conversion for Deaf and Dumb using Feature Extraction Hand Gesture Recognition and Speech Conversion for Deaf and Dumb using Feature Extraction Aswathy M 1, Heera Narayanan 2, Surya Rajan 3, Uthara P M 4, Jeena Jacob 5 UG Students, Dept. of ECE, MBITS, Nellimattom,

More information

Speak Out! Sam Trychin, Ph.D. Copyright 1990, Revised Edition, Another Book in the Living With Hearing Loss series

Speak Out! Sam Trychin, Ph.D. Copyright 1990, Revised Edition, Another Book in the Living With Hearing Loss series Speak Out! By Sam Trychin, Ph.D. Another Book in the Living With Hearing Loss series Copyright 1990, Revised Edition, 2004 Table of Contents Introduction...1 Target audience for this book... 2 Background

More information

Good Communication Starts at Home

Good Communication Starts at Home Good Communication Starts at Home It is important to remember the primary and most valuable thing you can do for your deaf or hard of hearing baby at home is to communicate at every available opportunity,

More information

support support support STAND BY ENCOURAGE AFFIRM STRENGTHEN PROMOTE JOIN IN SOLIDARITY Phase 3 ASSIST of the SASA! Community Mobilization Approach

support support support STAND BY ENCOURAGE AFFIRM STRENGTHEN PROMOTE JOIN IN SOLIDARITY Phase 3 ASSIST of the SASA! Community Mobilization Approach support support support Phase 3 of the SASA! Community Mobilization Approach STAND BY STRENGTHEN ENCOURAGE PROMOTE ASSIST AFFIRM JOIN IN SOLIDARITY support_ts.indd 1 11/6/08 6:55:34 PM support Phase 3

More information