Acoustic-Labial Speaker Verification

Pierre Jourlin (1,2), Juergen Luettin (1), Dominique Genoud (1), Hubert Wassner (1)

(1) IDIAP, rue du Simplon 4, CP 592, CH-1920 Martigny, Switzerland
    (luettin, genoud, wassner)@idiap.ch
(2) LIA, 339 chemin des Meinajaries, BP 1228, 84911 Avignon Cedex 9, France
    jourlin@univ-avignon.fr

Abstract. This paper describes a multimodal approach to speaker verification. The system consists of two classifiers, one using visual features and the other using acoustic features. A lip tracker is used to extract visual information from the speaking face, providing shape and intensity features. We describe an approach for normalizing and mapping the different modalities onto a common confidence interval. We also describe a novel method for integrating the scores of multiple classifiers. Verification experiments are reported for the individual modalities and for the combined classifier. The integrated system outperformed each sub-system and reduced the false acceptance rate of the acoustic sub-system from 2.3% to 0.5%.

1 Introduction

Automatic verification of a person's identity is a difficult problem and has received considerable attention over the last decade. The ability of such a system to reject impostors, who claim a false identity, is a critical issue in security applications. The use of multiple modalities such as face, profile, motion or speech is likely to decrease the possibility of false acceptance and to lead to higher robustness and performance [1]. Brunelli et al. [2] have previously described a bimodal approach to person identification. Their system was based on visual features of the static face image and on acoustic features of the speech signal. The performance of the integrated system was shown to be superior to that of each subsystem.

The cognitive aspect of lip movements in speech perception has been studied extensively, and the complementary nature of the visual signal has been successfully exploited in bimodal speech recognition systems [14]. The fact that temporal lip information contains not only speech information but also characteristic information about a person's identity had largely been ignored until recently, when Luettin et al. [11] proposed a new modality for person recognition based on spatio-temporal lip features. In this paper, we extend this approach and address the combination of the acoustic and visual speech modalities for a speaker verification system. We describe the normalization and mapping of the different modalities and the determination of a threshold for rejecting impostors. A scheme for combining the evidence of both modalities is described, and we show that the multimodal system outperforms both unimodal subsystems.

2 The Database

The M2VTS audio-visual database was collected at UCL (Catholic University of Louvain) [15]. It contains 37 speakers (male and female) pronouncing the digits from zero to nine in French. One recording is a sequence of the ten digits pronounced continuously. Five recordings were taken of each speaker, at one-week intervals, to account for minor face changes like beards and hairstyle. The images contain the whole head and are sampled at 25 Hz. We have divided the database into 3 sets: the first three shots were used as the training set, the 4th shot as the validation set and the 5th shot as the test set. The 5th shot represents the most difficult recordings to recognize. This shot differs from the others in face variation (head tilted, unshaved), voice variation (poor voice SNR), or shot imperfections (poor focus, different zoom factor).

3 Lip Feature Extraction

We are interested in facial changes due to speech production and therefore analyse the mouth region only. Common approaches in face recognition are often based on geometric features or intensity features, either of the whole face or of parts of the face [3]. We combine both approaches, assuming that much information about the identity of a speaker is contained in the lip contours and the grey-level distribution around the mouth area. During speech production the lip contours deform, and the intensities in the mouth area change due to lip deformation, protrusion and the visibility of teeth and tongue. These features contain information specific to the speech articulators of a person and to the way that person speaks. We aim to extract this information during speech production and to build spatio-temporal models for a speaking person.

3.1 Lip Model

Our lip model is based on active shape models [4] and has been described in detail in [10]. It is used to locate, track and parameterize the lips over an image sequence of a speaking person. Features are recovered from the tracking results. They describe the shape of the inner and outer lip contours and the intensity of the mouth area. The shape features and the intensity features are both based on principal component analysis, which was performed on a training set. The intensity model deforms with the lip contours and therefore represents shape-independent intensity information. This is an important property of the model: we obtain detailed shape information from the shape parameters and therefore would like the intensity model to describe intensity information which is independent of the lip shape and lip movements [12].
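As a rough illustration of this parameterization (a minimal sketch, not the authors' implementation: the point counts, sample counts and random training data below are placeholders), lip contours and mouth-area intensities can be reduced to a few principal-component parameters roughly as follows:

```python
import numpy as np

def fit_pca(X, n_components):
    """Learn a mean and principal axes from training vectors (one row each)."""
    mean = X.mean(axis=0)
    # Right singular vectors of the centred data are the principal axes.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(x, mean, axes):
    """Compact parameter vector: projection of one observation onto the axes."""
    return axes @ (x - mean)

# Placeholder training data standing in for the real tracker output:
# concatenated (x, y) coordinates of inner/outer lip contour points, and
# shape-normalised grey-level samples taken around the mouth area.
rng = np.random.default_rng(0)
contour_train = rng.random((300, 2 * 40))    # 40 contour points per frame
intensity_train = rng.random((300, 200))     # 200 grey-level samples per frame

shape_mean, shape_axes = fit_pca(contour_train, n_components=14)
inten_mean, inten_axes = fit_pca(intensity_train, n_components=10)

# A single tracked frame is then described by a handful of shape and
# intensity parameters instead of raw point coordinates and pixels.
shape_params = project(contour_train[0], shape_mean, shape_axes)
inten_params = project(intensity_train[0], inten_mean, inten_axes)
```

The component counts (14 shape, 10 intensity) mirror the labial feature vector described later in Section 4.3; everything else in the sketch is illustrative.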

3.2 Lip Tracking

Experiments were performed on all 5 shots of the M2VTS database. The database consists of colour images, which were converted to grey-level images for our experiments. Several subjects have a beard or did not shave between different recordings. We used examples from the training set to build the lip model. The model was then used to track the lips over all image sequences of all three sets. This involved analysing over 27,000 images, which we believe is the largest experiment reported so far for lip tracking.

It is important to evaluate the performance of the tracking algorithm, and we have previously attempted to do this by visually inspecting tracking results [10]. However, this task is very laborious and subjective. Here we omit a direct performance evaluation of the tracking algorithm. Instead, we evaluate the combined performance of the feature extraction and the recognition process by evaluating the person recognition performance only. Person recognition errors might therefore be due to inaccurate tracking results or due to classification errors. Examples of lip tracking results are shown in Fig. 1.

Fig. 1. Examples of lip tracking results (four example frames).

4 Speaker Verification

4.1 Test Protocol

We use the sequences of the training set (first 3 shots) of the 36 customers for training the speaker models. The validation set serves for computing the normalization and mapping function and the rejection threshold, and the test set is used for the verification tests. Subject 37 is only used as an impostor, claiming the identity of all 36 customers. Each customer is also used as an impostor against the 35 other customers. The verification mode is text-dependent and based on the whole sequence of ten digits.

For the verification task, we make use of a world model, which represents the average model of a large number of subjects (5 speakers for the acoustic model and 36 for the labial one). For each digit we compute the corresponding customer likelihood and the world likelihood. We thus obtain a customer and a world likelihood for the whole of the speech data.
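A minimal sketch of this scoring step, under the assumption of a simple model interface (the log_likelihood method and the segment representation are hypothetical; the actual likelihoods come from the HMMs described in Sections 4.2 and 4.3):

```python
def sequence_log_likelihoods(digit_segments, customer_models, world_models):
    """Accumulate customer and world log-likelihoods over a ten-digit access.

    digit_segments: list of (digit_label, feature_frames) pairs.
    customer_models, world_models: dicts mapping a digit label to an object
    exposing a log_likelihood(frames) method (hypothetical interface).
    """
    customer_ll, world_ll = 0.0, 0.0
    for digit, frames in digit_segments:
        customer_ll += customer_models[digit].log_likelihood(frames)
        world_ll += world_models[digit].log_likelihood(frames)
    return customer_ll, world_ll

def raw_score(customer_ll, world_ll):
    """Log of the customer/world likelihood ratio over the whole sequence."""
    return customer_ll - world_ll
```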

The difference between the ratio of the two scores and the threshold is then mapped to the interval [0, 1] using a sigmoid function [6]. If the final score is equal to 0.5, no decision is made; if it is below 0.5, the speaker is rejected; otherwise he is accepted.

Several methods have been proposed to find an a priori decision threshold according to various criteria, e.g. the Equal Error Rate and the Furui method [5]. Due to the small amount of speech data for each speaker, we calculated a customer-independent threshold based on a dichotomic method. The use of this method assumes that the verification error function is convex. This function is computed on the validation set, and the value for which the number of false acceptance and false rejection errors is minimal is used as the threshold value.

4.2 Acoustic Speaker Verification

Since the word sequences are known in our experiments, we use an HMM-based speech recognition system to segment the sentences into digits. The recognizer uses the known sequence of digit word models, which were trained on the Polyphone database of IDIAP [9], to find the word boundaries. Each digit HMM has been trained with 11 to 2 examples from 835 speakers. The segmentation is performed on all three sets.

The segmented training set is used to train one model for each digit and speaker. These models are called customer models. The acoustic parameters are Linear Prediction Cepstral Coefficients with first and second order derivatives. Each vector has 39 components. We used left-right HMMs with between 2 and 7 emitting states, depending on the digit length. Each state is modelled with a single Gaussian with a diagonal covariance matrix. The same configuration is used for the world model. The world model is trained on the Polyphone database using 3 examples from 5 speakers for each digit.

When an access test is performed, the speech is first segmented into digits. The test protocol described above is then applied, where the customer and world likelihoods are obtained as the product of all digit likelihoods, using the customer and world models, respectively. The mapping function obtained from the validation set is used on the test set to map the score into the confidence interval. On the test set, we obtain a false acceptance rate of 2.3% and a false rejection rate of 2.8%. The identification rate for the 36 speakers was 97.2% (see Fig. 2 and Table 1). However, it is well worth noting that only 36 tests were conducted for identification and false rejection, but 1332 (36 × 37) tests for false acceptance.

Fig. 2. Acoustic verification results (validation set): false acceptance and false rejection rates (% errors) as a function of the decision threshold.
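To make the score mapping and threshold selection of Section 4.1 concrete, here is a hedged sketch: the sigmoid slope is arbitrary, and the dichotomic method is rendered as a simple interval-narrowing search that relies on the stated convexity assumption; it is not claimed to be the exact procedure of the paper or of [6].

```python
import math

def map_score(raw_score, threshold, slope=1.0):
    """Map (raw score - threshold) into [0, 1]; 0.5 corresponds to the threshold."""
    return 1.0 / (1.0 + math.exp(-slope * (raw_score - threshold)))

def total_errors(threshold, customer_scores, impostor_scores):
    """False rejections plus false acceptances at a given threshold."""
    fr = sum(1 for s in customer_scores if map_score(s, threshold) < 0.5)
    fa = sum(1 for s in impostor_scores if map_score(s, threshold) > 0.5)
    return fr + fa

def search_threshold(customer_scores, impostor_scores, lo, hi, iterations=40):
    """Narrow [lo, hi] around the minimum of the (assumed convex) error count."""
    for _ in range(iterations):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if total_errors(m1, customer_scores, impostor_scores) <= \
           total_errors(m2, customer_scores, impostor_scores):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0
```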

4.3 Labial Speaker Verification

For segmenting the labial data we use the previous acoustic segmentation. Lip features can improve speech recognition results [8], but they do not provide enough information to segment speech into phonemic units. Lip movements may be useful in addition to the acoustic signal for segmenting speech, especially in a noisy acoustic environment [7, 13]. We did not use visual information for segmentation, since our acoustic models were trained on a very large database and are therefore more reliable than our labial models.

We used the same scoring method for labial verification as for acoustic verification, except for the world model, which was trained on the 36 customers of the M2VTS database. Labial data has a four times lower sampling frequency than acoustic data. The number of emitting states was therefore chosen to be 1 or 2, depending on the digit length. The parameter vectors consisted of 25 components: 14 shape parameters, 10 intensity parameters and the scale. The same test protocol that was used for the acoustic experiments was then applied to labial verification. On the test set, we obtained a false acceptance rate of 3.0% and a false rejection rate of 27.8%. The identification rate was 72.2% (see Fig. 3 and Table 1).

Fig. 3. Labial verification results (validation set): false acceptance and false rejection rates (% errors) as a function of the decision threshold.
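As a small sketch of how the acoustic digit boundaries might be reused for the labial stream: with video at 25 Hz and the acoustic analysis assumed to run at four times that rate (the 4:1 ratio stated above; the 100 Hz figure and the frame indices below are illustrative assumptions), the boundary conversion is just an integer division.

```python
def acoustic_to_video_segments(acoustic_segments, ratio=4):
    """Convert digit boundaries from acoustic-frame to video-frame indices.

    acoustic_segments: list of (digit, start_frame, end_frame) tuples produced
    by the HMM segmentation of Section 4.2 (indices here are illustrative).
    ratio: acoustic frames per video frame (4 for 100 Hz audio vs. 25 Hz video).
    """
    return [(digit, start // ratio, end // ratio)
            for digit, start, end in acoustic_segments]

# Example: a digit spanning acoustic frames 120-199 (0.8 s at 100 Hz)
# maps to video frames 30-49 at 25 Hz.
print(acoustic_to_video_segments([("un", 120, 199)]))
```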

4.4 Acoustic-Labial Verification

The acoustic-labial score is computed as the weighted sum of the acoustic and the labial scores. Both scores have been normalized as described in the previous sections. The process uses individual threshold values for each modality and maps the scores into a common confidence interval. The normalization process is a critical point in the design of an integration scheme and is necessary to ensure that the different modalities are mapped into the same interval and share a common threshold value.

The different modalities are now normalized, but they provide different levels of confidence. We therefore need to weight the contribution of each modality according to its confidence level. The weight is w for the acoustic score and 1 - w for the labial one. The same dichotomic algorithm used to compute the thresholds is now used to find the optimal weight. The verification error function on the validation data is used for the dichotomic search, for the same reasons as described for the threshold search.

The following results were obtained on the test set: using a weight of 0.86, we obtain a false acceptance rate of 0.5%, a false rejection rate of 2.8% and a correct identification rate of 100%. The absolute gain is a 1.8% reduction in cumulated verification errors (FA + FR) and an increase of 2.8% in the identification rate (ID). Fig. 4 shows the effect of the weighting on the acoustic-labial results, when the acceptance threshold is optimally fixed for each modality. Table 1 sums up the results.

Fig. 4. Results with different weights (validation set): false identification, false rejection and false acceptance rates (% errors) as a function of the weight, from purely labial scores (weight 0.0) to purely acoustic scores (weight 1.0).

Table 1. Results on the validation and test sets (ID: correct identification, FA: false acceptance, FR: false rejection; all values in %).

                     Validation               Test
Type of score      ID     FA    FR       ID     FA     FR
Acoustic         100.0    2.5   0.0     97.2    2.3    2.8
Labial            82.3    4.9   8.8     72.2    3.0   27.8
Bimodal          100.0    0.6   0.0    100.0    0.5    2.8
Number of tests     36   1332    36       36   1332     36

We have followed a data-driven approach to fusion, where fusion is present at different levels. At the first stage, learning and decoding of the labial models use the segmentation obtained from the acoustic models. The first score normalization is performed by normalizing the scores with respect to a world model for each modality. The final normalization is obtained by finding an optimal mapping into the interval [0, 1] for each modality. At this stage, the two scores are normalized, but we know that each modality has a different level of reliability. The last level of the fusion process is therefore to find the optimal weight for the two sources of information.
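Finally, a compact sketch of the fusion and weight selection (a grid search stands in here for the dichotomic search actually used; the scores and labels are assumed to be mapped validation scores in [0, 1] and genuine/impostor flags):

```python
def fuse(acoustic_score, labial_score, w):
    """Weighted sum of the two mapped scores; w weights the acoustic modality."""
    return w * acoustic_score + (1.0 - w) * labial_score

def search_weight(acoustic_scores, labial_scores, is_genuine, step=0.01):
    """Pick the weight minimising FA + FR on the validation accesses."""
    def errors(w):
        fa = fr = 0
        for a, l, genuine in zip(acoustic_scores, labial_scores, is_genuine):
            score = fuse(a, l, w)
            if genuine and score < 0.5:
                fr += 1
            elif not genuine and score > 0.5:
                fa += 1
        return fa + fr

    candidates = [i * step for i in range(int(round(1.0 / step)) + 1)]
    return min(candidates, key=errors)
```

In the paper, the analogous search on the validation data yields a weight of 0.86, i.e. a strongly acoustic weighting.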

5 Conclusion

Bimodal speech processing is a very new domain. The speech part of the M2VTS database may seem small compared with other acoustic databases, but it is perhaps the largest existing database for audio-visual speaker verification. Moreover, lip feature extraction is quite new in the computer vision field and is well known to be a difficult problem. Despite these difficult conditions, the results we have obtained are very promising. The number of tests is small compared to other acoustic speaker verification experiments. However, the reduction of the false acceptance rate for the multimodal system suggests that acoustic and labial information are complementary and that the additional use of lip information can improve the performance of an acoustic-based speaker verification system.

Acknowledgments

This work has been performed within the framework of the M2VTS (Multi Modal Verification for Teleservices and Security applications) project, granted by the ACTS program.

References

1. M. Acheroy, C. Beumier, J. Bigun, G. Chollet, B. Duc, S. Fischer, D. Genoud, P. Lockwood, G. Maitre, S. Pigeon, I. Pitas, K. Sobottka and L. Vandendorpe (1996) Multi-Modal Person Verification Tools using Speech and Images. Proceedings of the European Conference on Multimedia Applications, Services and Techniques, Louvain-la-Neuve, 747-761.

2. R. Brunelli and D. Falavigna (1995) Person Identification Using Multiple Cues. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, no. 10, 955-966.
3. R. Chellappa, C. L. Wilson and S. Sirohey (1995) Human and Machine Recognition of Faces: A Survey. Proceedings of the IEEE, vol. 83, no. 5, 705-740.
4. T. F. Cootes, A. Hill, C. J. Taylor and J. Haslam (1994) Use of active shape models for locating structures in medical images. Image and Vision Computing, vol. 12, no. 6, 355-365.
5. S. Furui (1994) An Overview of Speaker Recognition Technology. Proceedings of the ESCA Workshop on Automatic Speaker Recognition, Identification and Verification, Martigny, 1-9.
6. D. Genoud, F. Bimbot, G. Gravier and G. Chollet (1996) Combining Methods to Improve Speaker Verification Decision. Proceedings of the International Conference on Spoken Language Processing, Philadelphia.
7. P. Jourlin, M. El-Bèze and H. Méloni (1995) Bimodal Speech Recognition. Proceedings of the International Workshop on Automatic Face and Gesture Recognition, Zurich, 32-325.
8. P. Jourlin (1996) Handling Disynchronization Phenomena with HMM in Connected Speech. Proceedings of the European Signal Processing Conference, Trieste, 1:133-136.
9. G. Chollet, J.-L. Cochard, A. Constantinescu and P. Langlais (1995) Swiss French Polyphone and Polyvar: Telephone Speech Databases to Study Intra- and Inter-Speaker Variability. Technical Report, IDIAP, Martigny.
10. J. Luettin, N. A. Thacker and S. W. Beet (1996) Locating and Tracking Facial Speech Features. Proceedings of the International Conference on Pattern Recognition, Vienna.
11. J. Luettin, N. A. Thacker and S. W. Beet (1996) Speaker Identification by Lipreading. Proceedings of the International Conference on Spoken Language Processing, Philadelphia, PA, USA, vol. 1, 62-65.
12. J. Luettin, N. A. Thacker and S. W. Beet (1996) Speechreading using shape and intensity information. Proceedings of the International Conference on Spoken Language Processing, Philadelphia, PA, USA, vol. 1, 58-61.
13. M. W. Mak and W. G. Allen (1994) Lip-Motion Analysis for Speech Segmentation in Noise. Speech Communication, vol. 14, no. 3, 279-296.
14. E. D. Petajan (1984) Automatic Lipreading to Enhance Speech Recognition. Proceedings of the Global Telecommunications Conference, IEEE Communications Society, Atlanta, Georgia, 265-272.
15. S. Pigeon and L. Vandendorpe (1997) The M2VTS Multimodal Face Database (Release 1.00). Proceedings of the First International Conference on Audio- and Video-based Biometric Person Authentication, Crans-Montana, Switzerland.

This article was processed using the LaTeX macro package with the LLNCS style.