HISTOGRAM EQUALIZATION BASED FRONT-END PROCESSING FOR NOISY SPEECH RECOGNITION

20th May 2016. Vol. 87. No. 2. © 2005-2016 JATIT & LLS. All rights reserved.

HISTOGRAM EQUALIZATION BASED FRONT-END PROCESSING FOR NOISY SPEECH RECOGNITION

1 IBRAHIM MISSAOUI, 2 ZIED LACHIRI
National Engineering School of Tunis (ENIT), Signal Image and Pattern Recognition Laboratory, University of Tunis El Manar, BP. 37 Belvédère, Tunis, Tunisia
E-mail: 1 brahim.missaoui@enit.rnu.tn, 2 zied.lachiri@enit.rnu.tn

ABSTRACT
In this paper, we present Gabor feature extraction based on front-end processing using histogram equalization for noisy speech recognition. The proposed features, named Histogram Equalization of Gabor Bark Spectrum (HeqGBS) features, are extracted by applying 2-D Gabor processing followed by a histogram equalization step to the spectro-temporal representation of the Bark spectrum of the speech signal. Histogram equalization is used as front-end processing in order to reduce and eliminate undesired information in the spectrum representation. The proposed HeqGBS features are evaluated on a noisy isolated-word recognition task using HMMs. The obtained recognition rates confirm that the HeqGBS features yield interesting results compared to the Gabor features obtained from the log Mel-spectrogram.

Keywords: Front-end Processing, Histogram Equalization, 2-D Gabor Processing, Feature Extraction, Noisy Speech Recognition

1. INTRODUCTION
Recognizing speech in noisy environments is an indispensable requirement for the efficiency of various speech applications. However, these applications remain less effective than the human ability to perform this task, and no application that equals this ability has yet been developed [1][2]. Many research studies have been carried out to develop speech recognition systems based on classic techniques or on various auditory models.
The performance of classical techniques like PLP [3], MFCC [4] and LPC [5], or of auditory-model-based features such as the GFCC [6], AMFB [7] and GcFB [8] features, degrades in noisy environments and needs further improvement. Other recent features have integrated 2-D Gabor filters as a model of Spectro-Temporal Receptive Fields (STRF), such as those developed in [9] and [10]. The STRF is an estimate of the activity of auditory cortex neurons [11]. The 2-D Gabor filters have been applied to the log Mel-spectrogram and the PNCC spectrogram in [9], [10] and [12]. In [13], an auditory warped spectrum is processed by a bank of 2-D Gabor filters to generate Gabor features. Histogram equalization, which is a statistical feature normalization technique, has proved to be an important tool for improving the robustness of many speech recognition systems [14][15][16][17].

A new feature extraction method based on histogram equalization and 2-D Gabor processing for noisy speech recognition is presented in this work. Histogram equalization is used as front-end processing of the Gabor features obtained from the spectro-temporal representation of the Bark spectrum of the speech signal, in order to improve their robustness. The 2-D Gabor processing consists in applying a set of 41 2-D Gabor filters to the spectro-temporal representation of speech. Isolated-word recognition experiments were carried out on the TIMIT database [18] using the HTK toolkit (HTK 3.5) [19]. The performance of the proposed histogram-equalization-based features is compared to that of Gabor features obtained from the log Mel-spectrogram.

Our paper is organized as follows: Section 2 describes the proposed histogram equalization based feature extraction method and presents the principle of the histogram equalization technique. Section 3 presents the experimental recognition results. Section 4 concludes the paper.

2. HISTOGRAM EQUALIZATION BASED FEATURES EXTRACTION

2.1 Histogram Equalization
Histogram equalization (Heq) is a normalization operation for speech features. It was first developed for digital image processing, to normalize extracted visual features such as grey level and contrast [20]. Heq has been successfully employed in many feature extraction approaches to improve the performance and robustness of ASR applications [14][15][16][17]. It can be defined as a transformation of the probability density function (pdf) of the speech feature vectors (testing) to a reference pdf (training); i.e., the transformation equalizes the histogram by converting the speech vectors' histogram to the reference histogram [21]. Heq can be formulated as follows [22]:

Let z be a feature set with pdf P(z) and cumulative distribution function (CDF) C_z(z). This set is transformed by a transformation function T into z_h with pdf P(z_h) and CDF C_{z_h}(z_h), which satisfies:

    C_{z_h}(z_h) = C_ref(z_h)    (1)

where C_ref(z_h) is the reference CDF. The transformation function T is defined as:

    z_h = T(z) = C_{z_h}^{-1}(C_z(z)) = C_ref^{-1}(C_z(z))    (2)

Figure 1: The block diagram of the proposed histogram equalization based feature extraction method (speech signal, power spectrum, Bark-scale filterbank, equal-loudness pre-emphasis, intensity-loudness conversion, 2-D Gabor filters, histogram equalization, post-processing, HeqGBS features).

2.2 The proposed HeqGBS features
The proposed Histogram Equalization of Gabor Bark Spectrum (HeqGBS) features are obtained using the histogram equalization based extraction method summarized in the block diagram of Figure 1. The input isolated words are first processed by a windowing operation, followed by a Discrete Fourier Transform of each frame. The corresponding power spectrum is obtained by squaring the magnitude of the DFT outputs.
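The CDF mapping of Eqs. (1)-(2) in Section 2.1 can be approximated per feature component with empirical quantiles. The sketch below is illustrative only, not the authors' implementation; the function name and the quantile-grid size are our own choices:

```python
import numpy as np

def histogram_equalize(x, x_ref, n_points=100):
    """Map test features x onto the distribution of reference features
    x_ref, i.e. z_h = C_ref^{-1}(C_z(z)) of Eq. (2), using empirical
    quantiles. Both inputs are 1-D arrays of one feature component."""
    q = np.linspace(0.0, 1.0, n_points)
    test_quantiles = np.quantile(x, q)     # approximates C_z^{-1}
    ref_quantiles = np.quantile(x_ref, q)  # approximates C_ref^{-1}
    # C_z(x) via interpolation, then C_ref^{-1} of that probability
    probs = np.interp(x, test_quantiles, q)
    return np.interp(probs, q, ref_quantiles)

# Usage: equalize each feature dimension independently.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5000)  # reference (training) distribution
test = rng.normal(3.0, 2.0, 1000)   # distribution shifted/scaled by "noise"
eq = histogram_equalize(test, train)
# After equalization, the test statistics approach the reference ones.
```

In a full front-end, the mapping would be applied to each Gabor feature component separately, with the reference CDF estimated on the clean training set.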
A filterbank designed to simulate the Bark-scale frequencies is applied to the power spectrum. The Bark-scale mapping is given by [3]:

    Bark(f) = 13 arctan(7.6 x 10^-4 f) + 3.5 arctan((f / 7500)^2)    (3)

The output Bark spectrum is pre-emphasized and then compressed by applying an equal-loudness pre-emphasis function, followed by an intensity-loudness conversion. These two steps approximate the human sensitivity characteristic at various frequencies and the power-law characteristic of hearing [3]. Then, 2-D Gabor processing is applied to the obtained representation of the isolated speech word. The processing uses 41 Gabor filters [9]. Each filter is the product of a complex sinusoid v(n, k) and an envelope b(n, k), which are defined as [9][10][13]:

    v(n, k) = exp(i w_n (n - n_0) + i w_k (k - k_0))    (4)

    b(n, k) = 0.5 cos(2 pi (n - n_0) / (W_n + 1)) cos(2 pi (k - k_0) / (W_k + 1))    (5)

where v(n, k) is a complex sinusoid and b(n, k) is a Hanning envelope with window lengths W_n and W_k. The periodicity of v(n, k) is defined by the two radian frequencies w_n and w_k. These two terms allow v(n, k) to be tuned to spectro-temporal modulations of various extents and orientations, including diagonal ones. Finally, histogram equalization of the obtained Gabor features is performed in order to improve their performance and robustness, yielding the proposed features, named Histogram Equalization of Gabor Bark Spectrum (HeqGBS) features.

3. EXPERIMENT
The performance of the HeqGBS features obtained by the proposed extraction method is compared to that of the Gabor features proposed by M.R. Schädler in [9], which are obtained by filtering the log Mel-spectrogram with 41 Gabor filters. The HeqGBS features are evaluated on a set of isolated words from the TIMIT corpus [18]; 924 words and 3294 words of this set are used for the learning step and the recognition step, respectively. These words were spoken by a set of speakers representing various dialects of the United States. The noisy speech words used in our experiment are obtained by contaminating the clean ones with three additive noises from the AURORA corpus [23] at seven SNR levels. The sampling frequency of these words is 16 kHz. The noises used are car noise, street noise and exhibition noise. The seven SNR levels are -3 dB, 0 dB, 3 dB, 6 dB, 9 dB, 12 dB and 15 dB.

The temporal representations of the isolated speech word "All", the car noise and the corresponding noisy isolated word, together with their spectrograms, are illustrated in Figure 2.

The HMM recognizer is used as the recognition system and has been implemented with the HTK toolkit (HTK 3.5) [19]. It uses a left-to-right HMM with 5 states and 8 multivariate Gaussian mixtures characterized by a full covariance matrix (HMM_8_GM) [24].
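The Bark mapping of Eq. (3) and the Gabor patches of Eqs. (4)-(5) can be written out as follows. This is an illustrative sketch under our own reconstruction of the equations, not the authors' implementation; the function names and the example modulation parameters are hypothetical, and the actual 41-filter parameter set of [9] is not reproduced here:

```python
import numpy as np

def hz_to_bark(f):
    """Bark-scale mapping of Eq. (3)."""
    f = np.asarray(f, dtype=float)
    return 13.0 * np.arctan(7.6e-4 * f) + 3.5 * np.arctan((f / 7500.0) ** 2)

def gabor_filter(omega_n, omega_k, W_n, W_k):
    """One 2-D Gabor patch: the complex sinusoid v(n,k) of Eq. (4) times
    the envelope b(n,k) of Eq. (5); n indexes time frames, k spectral
    channels. The patch is centered, i.e. (n0, k0) = (0, 0)."""
    n = np.arange(W_n + 1) - W_n / 2.0
    k = np.arange(W_k + 1) - W_k / 2.0
    N, K = np.meshgrid(n, k, indexing="ij")
    v = np.exp(1j * (omega_n * N + omega_k * K))        # Eq. (4)
    b = (0.5 * np.cos(2 * np.pi * N / (W_n + 1))
             * np.cos(2 * np.pi * K / (W_k + 1)))       # Eq. (5)
    return v * b

# Example: a patch tuned to a downward-sweeping spectro-temporal modulation.
# The filter bank output is the 2-D convolution of the Bark spectrum with
# the real part of such patches.
g = gabor_filter(omega_n=0.3, omega_k=-0.5, W_n=14, W_k=8)
```

For reference, hz_to_bark(1000.0) is roughly 8.5 Bark, consistent with the usual Bark scale.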
The recognition rates of the HeqGBS features and the Gabor features [9] for the three noises (car, street and exhibition) are reported in Tables 1, 2 and 3. As shown in the three tables, the HeqGBS features yield the best recognition results compared to those of the Gabor features at almost all SNR levels of the three noises, especially in the SNR range of 0 dB, 3 dB, 6 dB and 9 dB.

Figure 2: The temporal representations of the isolated word "All", the car noise and the corresponding noisy isolated word, and their spectrograms.
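The noisy test words described above are produced by scaling the noise to a target SNR before adding it to the clean signal. A minimal sketch, with a hypothetical function name of our own:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale noise so that 10*log10(P_speech / P_noise) equals snr_db,
    then add it to the clean signal."""
    noise = noise[:len(speech)]  # trim noise to the utterance length
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
    return speech + gain * noise

# Usage: contaminate a 1-second tone at 16 kHz with white noise at 0 dB SNR.
rng = np.random.default_rng(1)
clean = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
noise = rng.normal(size=16000)
noisy = mix_at_snr(clean, noise, snr_db=0.0)
```

The same scaling applies at each of the seven SNR levels used in the experiment.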

For example, in the case of exhibition noise at an SNR of 0 dB, the recognition rate of the HeqGBS features is higher than that of the Gabor features by 20.36 points. However, only a slight improvement is obtained for the three noises at higher SNR levels; for example, at 15 dB in the car noise case, the HeqGBS and Gabor features yield 96.5 and 92.82, respectively.

Table 1: Comparison of the Obtained Recognition Rates of the HeqGBS and Gabor Features in the Car Noise Case

Recognition rate using HMM_8_GM
SNR      Gabor    HeqGBS
-3 dB    15.77    27.66
0 dB     25.21    55.49
3 dB     39.91    78.23
6 dB     57.87    89.1
9 dB     73.79    93.53
12 dB    86.3     95.11
15 dB    92.82    96.5

Table 2: Comparison of the Obtained Recognition Rates of the HeqGBS and Gabor Features in the Exhibition Noise Case

Recognition rate using HMM_8_GM
SNR      Gabor    HeqGBS
-3 dB    23.53    34.3
0 dB     37.75    58.11
3 dB     58.57    76.81
6 dB     76.65    88.43
9 dB     87.12    93.11
12 dB    92.9     95.14
15 dB    94.89    96.8

Table 3: Comparison of the Obtained Recognition Rates of the HeqGBS and Gabor Features in the Street Noise Case

Recognition rate using HMM_8_GM
SNR      Gabor    HeqGBS
-3 dB    11.96    28.81
0 dB     25.27    59.53
3 dB     48.22    79.96
6 dB     67.55    89.59
9 dB     80.61    93.53
12 dB    89.41    95.8
15 dB    94.4     95.99

4. CONCLUSION
We presented a feature extraction method based on 2-D Gabor processing and histogram equalization for speech recognition. Histogram equalization is employed as front-end post-processing to reduce the undesired information in the features of isolated speech words. The proposed HeqGBS features are evaluated using the TIMIT database and an HMM-based recognition system. Comparing the recognition performance of the HeqGBS features obtained with our method to that of the Gabor features obtained from the log Mel-spectrogram shows that the HeqGBS features yield the best recognition performance.

REFERENCES:
[1] J. Barker, E. Vincent, N. Ma, H. Christensen, and P. Green, "The PASCAL CHiME speech separation and recognition challenge", Computer Speech & Language, Vol. 27, No. 3, 2013, pp. 621-633.
[2] D. Yu and L. Deng, Automatic Speech Recognition: A Deep Learning Approach, Springer, 2015.
[3] H. Hermansky, "Perceptual linear predictive (PLP) analysis of speech", The Journal of the Acoustical Society of America, Vol. 87, No. 4, 1990, pp. 1738-1752.

[4] S.B. Davis and P. Mermelstein, "Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences", IEEE Transactions on Acoustics, Speech and Signal Processing, Vol. 28, No. 4, 1980, pp. 357-366.
[5] D. O'Shaughnessy, "Linear predictive coding", IEEE Potentials, Vol. 7, No. 1, 1988, pp. 29-32.
[6] J. Qi, D. Wang, Y. Jiang, and R. Liu, "Auditory features based on gammatone filters for robust speech recognition", IEEE International Symposium on Circuits and Systems (ISCAS), May 19-23, 2013.
[7] N. Moritz, "An Auditory Inspired Amplitude Modulation Filter Bank for Robust Feature Extraction in Automatic Speech Recognition", IEEE/ACM Transactions on Audio, Speech, and Language Processing, Vol. 23, No. 11, 2015, pp. 1926-1937.
[8] Y. Zouhir and K. Ouni, "A bio-inspired feature extraction for robust speech recognition", SpringerPlus, Vol. 3, 2014, p. 651.
[9] M.R. Schädler, B.T. Meyer, and B. Kollmeier, "Spectro-temporal modulation subspace-spanning filter bank features for robust automatic speech recognition", Journal of the Acoustical Society of America, Vol. 131, No. 5, 2012, pp. 4134-4151.
[10] M.R. Schädler and B. Kollmeier, "Separable spectro-temporal Gabor filter bank features: Reducing the complexity of robust features for automatic speech recognition", Journal of the Acoustical Society of America, Vol. 137, No. 4, 2015, pp. 2047-2059.
[11] S. Shamma, "Spectro-Temporal Receptive Fields", Encyclopedia of Computational Neuroscience, Springer, 2014, pp. 1-6.
[12] B.T. Meyer, C. Spille, B. Kollmeier, and N. Morgan, "Hooking up spectro-temporal filters with auditory-inspired representations for robust automatic speech recognition", Proceedings of INTERSPEECH, September 9-13, 2012, pp. 1259-1262.
[13] I. Missaoui and Z. Lachiri, "Gabor Filterbank Features for Robust Speech Recognition", International Conference on Image and Signal Processing (ICISP), LNCS 8509, Springer, 2014, pp. 665-671.
[14] V. Joshi, R. Bilgi, S. Umesh, L. García, and M.C. Benítez, "Sub-band based histogram equalization in cepstral domain for speech recognition", Speech Communication, Vol. 69, 2015, pp. 46-65.
[15] H.J. Hsieh, B. Chen, and J.W. Hung, "Histogram equalization of contextual statistics of speech features for robust speech recognition", Multimedia Tools and Applications, Vol. 74, No. 17, 2015, pp. 6769-6795.
[16] R. Al-Wakeel, M. Shoman, M. Aboul-Ela, and S. Abdou, "Stereo-based histogram equalization for robust speech recognition", EURASIP Journal on Audio, Speech, and Music Processing, Vol. 2015, pp. 1-10.
[17] M.R. Schädler and B. Kollmeier, "Normalization of spectro-temporal Gabor filter bank features for improved robust automatic speech recognition systems", Proceedings of INTERSPEECH, September 9-13, 2012.
[18] J.S. Garofolo, L.F. Lamel, W.M. Fisher, J.G. Fiscus, and D.S. Pallett, "TIMIT acoustic-phonetic continuous speech corpus CD-ROM", NASA STI/Recon Technical Report N 93, 27403, 1993.
[19] S. Young, G. Evermann, D. Kershaw, G. Moore, J. Odell, D. Ollason, V. Valtchev, and P. Woodland, The HTK Book (HTK version 3.5), Cambridge University Engineering Department, 2015.
[20] T. Acharya and A.K. Ray, Image Processing: Principles and Applications, Wiley-Interscience, 2005.
[21] S. Molau, M. Pitz, and H. Ney, "Histogram-based normalization in the acoustic feature space", IEEE Workshop on Automatic Speech Recognition and Understanding, 2001, pp. 21-24.
[22] A. de la Torre, A.M. Peinado, J.C. Segura, J.L. Pérez-Córdoba, M.C. Benítez, and A.J. Rubio, "Histogram equalization of speech representation for robust speech recognition", IEEE Transactions on Speech and Audio Processing, Vol. 13, No. 3, 2005, pp. 355-366.
[23] H. Hirsch and D. Pearce, "The Aurora Experimental Framework for the Performance Evaluation of Speech Recognition Systems Under Noisy Conditions", Proceedings of the ISCA Workshop on Automatic Speech Recognition: Challenges for the New Millennium, September 18-20, 2000, pp. 181-188.
[24] Y. Ephraim and N. Merhav, "Hidden Markov processes", IEEE Transactions on Information Theory, Vol. 48, No. 6, 2002, pp. 1518-1569.