Performance of Gaussian Mixture Models as a Classifier for Pathological Voice

Jianglin Wang, Cheolwoo Jo
SASPL, School of Mechatronics, Changwon National University
Changwon, Gyeongnam 641-773, Republic of Korea
xiaowangyc@hotmail.com, cwjo@sarim.changwon.ac.kr

Abstract

This study focuses on the classification of pathological voice using the GMM (Gaussian Mixture Model) and compares the results to previous work done with an ANN (Artificial Neural Network). Speech data from normal people and from patients were collected, then diagnosed and classified into two different categories. Six characteristic parameters (Jitter, Shimmer, NHR, SPI, APQ and RAP) were chosen. Classifiers based on the artificial neural network and on the Gaussian mixture model were then employed to discriminate the data into normal and pathological speech. The GMM method attained a 98.4% average correct classification rate on training data and 95.2% on test data. Different numbers of mixtures (3 to 5) were used in the GMM in order to find an optimal condition for classification. We also compared the average classification rates of the GMM, ANN and HMM.

1. Introduction

The diagnosis of pathological voice is a topic that has received considerable attention. Several medical diseases adversely affect the human voice (Jo & Kim, 1998; Dibazar & Narayanan, 2002). A doctor can use the available apparatus for detection of pathological voice, but such examination is invasive and requires expert analysis of numerous speech signal parameters. Automatic voice analysis for pathological speech has advantages: it is quantitative and non-invasive, it allows the identification and monitoring of vocal system diseases, and it reduces analysis cost and time (Dibazar & Narayanan, 2002). In pathological voice classification the goal is to decide, based on the voice of a patient, whether that voice is normal or pathological.
Successful pathological voice classification will enable an automatic, non-invasive device to diagnose and analyze the voice of a patient. Recent approaches to pathological voice classification have used various pattern classification methods. Several studies, such as classification of pathological voice including severely noisy cases (Li, Jo & Wang, 2004), pathological voice quality assessment using an ANN (Ritchings, McGillion & Moore, 2002), and automatic detection of voice impairments using short-term cepstral parameters and neural network based detectors (Godino-Llorente & Gomez-Vilda, 2004), have recently been applied to various kinds of pathological classification tasks. The ANN has been widely used because there is no need to consider the details of a mathematical model of the data; it is relatively easy to train and has produced good pathological recognition performance. The major drawback of the ANN method is that it depends on the data set: it cannot guarantee good performance when the total size of the data is small or when new data are added to the original data set. In previous studies, HMM-based methods have also been used to detect disordered speech automatically; Dibazar et al. applied an HMM to classify pathological speech (Dibazar & Narayanan, 2002; Wang & Jo, 2006). The GMM method is based on a finite mixture probability distribution model, and it has been applied successfully to robust speaker recognition. GMMs provide a robust speaker representation for the difficult task of speaker identification using corrupted, unconstrained speech (Reynolds, Rose & Smith, 1992). The models are computationally inexpensive and easily implemented on a real-time platform. Furthermore, their probabilistic framework allows direct integration with speech recognition systems and incorporation of newly developed speech robustness techniques.
The success of speaker identification provides the insight to apply the GMM method to pathological voice identification. In this paper the GMM method was used to classify a mixed voice data set (pathological and normal voices) into normal and pathological classes. Six characteristic parameters were chosen, and the Gaussian mixture model was used to classify the pathological voice. Finally, the results of the GMM method were compared to previous results obtained with an ANN (Li et al., 2004) and an HMM (Wang & Jo, 2006).

Proceedings of the 11th Australian International Conference on Speech Science & Technology, ed. Paul Warren & Catherine I. Watson. ISBN 0 95 2 9. University of Auckland, New Zealand. December 6-8, 2006. Copyright, Australian Speech Science & Technology Association Inc.

2. Database

To collect voice data, a collection system was installed in a room of the ENT department of a hospital. The recording process was executed semi-automatically, with the intervention of an operator to control the quality and procedure. Voice materials from different male speakers were also collected using DAT (Digital Audio Tape) (Jo, Kim, Kim, Wang & Jeon, 2001). The sampling rate was 50 kHz and the resolution 16 bits. The collection was conducted in the soundproof room of a hospital. All subjects were asked to pronounce a sustained vowel /a/. After removing invalid data from the raw data sets, the total voice data included 4 normal cases and pathological cases (108 relatively less noisy voices and 3 severely noisy voices). The vocal diseases considered in the relatively less noisy pathological cases were Vocal Polyp, Vocal Cord Palsy, Vocal Nodule, Vocal Cyst, Vocal Edema and Laryngitis. The database of vocal fold diseases is shown in Table 1.

Table 1. Vocal Fold Diseases

Disease          No. of Cases
Cyst             5
Edema            10
Laryngitis       5
Nodule           10
Palsy            10
Polyp            10
Glottic Cancer   58
Total            108

3. Methodology

In this section the analysis procedures and methods used are described; our study focuses on the classification between normal and pathological cases. The analysis method for classification of the severely noisy pathological voice was introduced in previous work (Li et al., 2004). The MDVP (Multi-Dimensional Voice Program) was used as the analyzer to calculate 6 different numerical parameters (Operations Manual, 1993): Jitter, RAP, Shimmer, APQ, NHR and SPI. The 6 parameters fall into 3 categories according to their characteristics: 2 pitch-related parameters (Jitter, RAP), 2 amplitude-related parameters (Shimmer, APQ) and 2 noise-related parameters (NHR, SPI). Together they represent several aspects of the speech characteristic information, so these six parameters were chosen as the input in our experiments.
Each parameter is defined as follows.

3.1. Parameters

3.1.1. Jitter

Jitter is a relative evaluation of the period-to-period (very short-term) variability of the pitch within the analyzed voice sample. Voice break areas are excluded.

$$\mathrm{Jitt} = \frac{\frac{1}{N-1}\sum_{i=1}^{N-1}\left|T_o^{(i)} - T_o^{(i+1)}\right|}{\frac{1}{N}\sum_{i=1}^{N} T_o^{(i)}} \times 100\% \quad (1)$$

where $T_o^{(i)}$, $i = 1, 2, \ldots, N$ is the extracted pitch period data and $N = \mathrm{PER}$ is the number of extracted pitch periods.

3.1.2. RAP

RAP (Relative Average Perturbation) is a relative evaluation of the period-to-period variability of the pitch within the analyzed voice sample with a smoothing factor of 3 periods. Voice break areas are excluded.

$$\mathrm{RAP} = \frac{\frac{1}{N-2}\sum_{i=2}^{N-1}\left|\frac{T_o^{(i-1)} + T_o^{(i)} + T_o^{(i+1)}}{3} - T_o^{(i)}\right|}{\frac{1}{N}\sum_{i=1}^{N} T_o^{(i)}} \times 100\% \quad (2)$$

where $T_o^{(i)}$, $i = 1, 2, \ldots, N$ is the extracted pitch period data and $N = \mathrm{PER}$ is the number of extracted pitch periods.

3.1.3. Shimmer

Shimmer Percent (%) is a relative evaluation of the period-to-period (very short-term) variability of the peak-to-peak amplitude within the analyzed voice sample. Voice break areas are excluded.

$$\mathrm{Shim} = \frac{\frac{1}{N-1}\sum_{i=1}^{N-1}\left|A^{(i)} - A^{(i+1)}\right|}{\frac{1}{N}\sum_{i=1}^{N} A^{(i)}} \times 100\% \quad (3)$$

where $A^{(i)}$, $i = 1, 2, \ldots, N$ is the extracted peak-to-peak amplitude data and $N = \mathrm{PER}$ is the number of extracted impulses.

3.1.4. APQ

APQ (Amplitude Perturbation Quotient) (%) is a relative evaluation of the period-to-period variability of the peak-to-peak amplitude within the analyzed voice sample at a smoothing of 5 periods. Voice break areas are excluded.

$$\mathrm{APQ} = \frac{\frac{1}{N-4}\sum_{i=1}^{N-4}\left|\frac{1}{5}\sum_{r=0}^{4} A^{(i+r)} - A^{(i+2)}\right|}{\frac{1}{N}\sum_{i=1}^{N} A^{(i)}} \times 100\% \quad (4)$$

where $A^{(i)}$, $i = 1, 2, \ldots, N$ is the extracted peak-to-peak amplitude data and $N = \mathrm{PER}$ is the number of extracted impulses.
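As a concrete illustration, the four perturbation measures defined in equations (1)-(4) can be computed directly from the extracted pitch-period and amplitude sequences. The following sketch is written for this text (it is not MDVP's code); it assumes the periods To and amplitudes A are available as plain lists:

```python
# Illustrative implementations of the MDVP-style perturbation measures
# defined in equations (1)-(4). Function and variable names are ours.

def _mean(xs):
    return sum(xs) / len(xs)

def jitt(To):
    """Jitter (%): mean absolute period-to-period difference over the mean period."""
    N = len(To)
    num = sum(abs(To[i] - To[i + 1]) for i in range(N - 1)) / (N - 1)
    return 100.0 * num / _mean(To)

def rap(To):
    """RAP (%): period perturbation against a 3-period moving average."""
    N = len(To)
    num = sum(abs((To[i - 1] + To[i] + To[i + 1]) / 3.0 - To[i])
              for i in range(1, N - 1)) / (N - 2)
    return 100.0 * num / _mean(To)

def shim(A):
    """Shimmer (%): mean absolute amplitude difference over the mean amplitude."""
    N = len(A)
    num = sum(abs(A[i] - A[i + 1]) for i in range(N - 1)) / (N - 1)
    return 100.0 * num / _mean(A)

def apq(A):
    """APQ (%): amplitude perturbation against a 5-period moving average."""
    N = len(A)
    num = sum(abs(sum(A[i + r] for r in range(5)) / 5.0 - A[i + 2])
              for i in range(N - 4)) / (N - 4)
    return 100.0 * num / _mean(A)
```

A perfectly periodic signal (constant periods and amplitudes) yields 0% on all four measures, which matches their interpretation as perturbation quantities.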

3.1.5. NHR

NHR (Noise-to-Harmonic Ratio) is the average ratio of the inharmonic spectral energy in the frequency range 1500-4500 Hz to the harmonic spectral energy in the frequency range 70-4500 Hz. This is a general evaluation of the noise present in the analyzed signal.

3.1.6. SPI

SPI (Soft Phonation Index) is the average ratio of the lower-frequency harmonic energy in the range 70-1600 Hz to the higher-frequency harmonic energy in the range 1600-4500 Hz.

3.1.7. GMM

The classification of pathological voice in this study was based on a GMM approach consisting of 3 steps. First, the characteristic parameters of the pathological voice data were calculated; in our study these were the 6 characteristic parameters introduced in the sections above. Second, GMM models with different numbers of Gaussian mixtures were trained; the number of mixtures was varied to obtain an optimal classification performance. Third, the trained GMM models were used as a classifier to perform the classification tasks. The classification method is shown in Fig. 1.

Figure 1. Block diagram of the method (training of the GMM models and classification of pathological voice).

4. Experimental Results

The experimental results for the classification of pathological voice are shown in this section. The purpose of our experiment is to classify the mixed voice data set into normal and pathological voices and to compare the classification performance to the results of the artificial neural network in previous work, so the same pre-conditions were configured: the same data set, the same characteristic parameters and the same number of training runs. Since the total number of data was small, we trained and tested the ANN and GMM models by splitting the total data set into two parts: two thirds of the data were used for training and the remaining one third for testing.
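The three steps above can be sketched in code. The following is a minimal illustration rather than the authors' implementation: a small diagonal-covariance GMM trained by EM, one model fitted per class, and classification by the larger log-likelihood. All names, and the toy 2-D usage in place of the 6 MDVP parameters, are assumptions of the sketch.

```python
import numpy as np

def _log_joint(X, model):
    """Per-component log of weight * diagonal Gaussian density; shape (n_mix, n)."""
    w, mu, var = model
    return (np.log(w)[:, None]
            - 0.5 * np.sum(np.log(2 * np.pi * var)[:, None, :]
                           + (X[None, :, :] - mu[:, None, :]) ** 2 / var[:, None, :],
                           axis=2))

def fit_gmm(X, n_mix=3, n_iter=50, seed=0):
    """Fit a diagonal-covariance GMM to X (n_samples, n_features) with EM."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.full(n_mix, 1.0 / n_mix)                       # mixture weights
    mu = X[rng.choice(n, size=n_mix, replace=False)].astype(float)
    var = np.tile(X.var(axis=0) + 1e-6, (n_mix, 1))       # diagonal variances
    for _ in range(n_iter):
        log_p = _log_joint(X, (w, mu, var))
        resp = np.exp(log_p - np.logaddexp.reduce(log_p, axis=0))  # E-step
        nk = resp.sum(axis=1) + 1e-12                     # M-step
        w = nk / n
        mu = (resp @ X) / nk[:, None]
        var = (resp @ X ** 2) / nk[:, None] - mu ** 2 + 1e-6
    return w, mu, var

def loglik(X, model):
    """Per-sample log-likelihood under the mixture."""
    return np.logaddexp.reduce(_log_joint(X, model), axis=0)

def classify(X, normal_model, path_model):
    """Label 1 (pathological) when that model explains the sample better, else 0."""
    return (loglik(X, path_model) > loglik(X, normal_model)).astype(int)
```

In use, one GMM would be fitted to the parameter vectors of the normal voices and one to those of the pathological voices, and a new sample is assigned to whichever class-model gives it the higher likelihood.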
In each training stage, the artificial neural network and the Gaussian mixture model were trained and tested separately using different combinations of data sets, which is especially useful when little speech data is available for model training and testing, as it compensates for the small size of the data sets. Each training and test stage was performed 5 times. The GMM-based classifier was constructed to classify the normal and pathological voices after computation of the 6 parameters. The complete set of data combinations was run 5 times. Different numbers of Gaussian mixtures (3, 4 and 5) were set to compare which mixture model attains the highest classification performance. Table 2 shows the classification rates from GMM training and testing; "1st Run" denotes the first set of data, and similarly for the others. From Table 2, the highest classification is obtained with the 5-mixture model: the average classification rate is 98.4% for training data and 95.2% for test data.

Table 2. The classification rate (%) in GMM.

Mixtures     Run       Training   Test
3 Mixtures   1st Run   .0         .0
             2nd Run   99.0       .0
             3rd Run   .0         .0
             4th Run   95.0       .0
             5th Run   95.0       .0
4 Mixtures   1st Run   97.0       .0
             2nd Run   .0         84.0
             3rd Run   .0         .0
             4th Run   .9         .0
             5th Run   97.0       .0
5 Mixtures   1st Run   98.0       .0
             2nd Run   99.0       .0
             3rd Run   100.0      .0
             4th Run   97.0       .0
             5th Run   98.0       .0

5. Discussion

In this section we discuss the experimental results obtained from the proposed analysis methods. In our experiment we also need to know which number of mixtures gives the optimal classification rate, so GMMs with different mixture numbers (3 to 5) were trained to find the optimal number. Fig. 2 shows the average classification rate for each mixture number; the best mixture number for the test data is 5. Due to the limited number of data, a different data split was used for training and testing each time.
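The repeated 2/3-train, 1/3-test protocol described above can be sketched as follows. The helper `train_and_score` is a hypothetical stand-in for fitting and scoring any of the classifiers (ANN, GMM, HMM); everything here is illustrative.

```python
import random

# Sketch of the evaluation protocol: five repeated random splits,
# two thirds for training and one third for testing, accuracies averaged.

def repeated_split_accuracy(samples, labels, train_and_score, n_runs=5, seed=0):
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    accs = []
    for _ in range(n_runs):
        rng.shuffle(idx)                      # a fresh data combination per run
        cut = (2 * len(idx)) // 3             # two thirds for training
        tr, te = idx[:cut], idx[cut:]
        accs.append(train_and_score(
            [samples[i] for i in tr], [labels[i] for i in tr],
            [samples[i] for i in te], [labels[i] for i in te]))
    return sum(accs) / n_runs                 # average over the runs
```

Averaging over several random splits gives a more stable accuracy estimate than a single split when, as here, the total data set is small.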
Figure 2. The classification rate for different mixtures (training and test data).

The confusion matrix shows how many voices were correctly classified. The best result (5 GMM mixtures, 2nd run) is presented through the confusion matrices in Tables 3 and 4. True positive (TP) is the ratio of normal voices correctly classified to the total number of normal voices. False positive (FP) is the ratio of normal voices wrongly classified as pathological to the total number of normal voices. True negative (TN) is the ratio of pathological voices correctly classified to the total number of pathological voices. False negative (FN) is the ratio of pathological voices wrongly classified as normal to the total number of pathological voices. From Table 4, GMM classification of the test data shows a high correct classification rate for both normal and pathological voices. FN = 2.8% means that some pathological speech data were misclassified as normal; however, TN = 97.2% demonstrates that most of the pathological speech data were correctly classified. The former case is the serious one in practical conditions. Comparing Tables 3 and 4, the difference in FP between training and test data is 7.1% and the difference in TN is 1.4%.

Table 3. The confusion matrix for training data.

                   Normal      Pathological
In Normal          TP = 100%   FP = 0%
In Pathological    FN = 1.4%   TN = 98.6%

Table 4. The confusion matrix for test data.

                   Normal       Pathological
In Normal          TP = 92.9%   FP = 7.1%
In Pathological    FN = 2.8%    TN = 97.2%

From this result we can conclude that pathological speech has characteristics distinctive enough for the GMM to recognize the data from its characteristic parameters. Next, the comparison among the three methods (ANN, HMM and GMM) is discussed. The ANN-based approach was introduced in previous work (Li et al., 2004). Different numbers of hidden layers (6, 9 and 12) were set to obtain the best classification rate; the best ANN performance was obtained with 12 hidden layers, with an average classification rate of 98.0% for training data and 94.2% for test data.
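The four rates in Tables 3 and 4 can be computed from true and predicted labels as follows; this is an illustrative helper (the function name is ours), with 0 = normal and 1 = pathological, and each rate normalized by its own class size as defined above:

```python
# Row-normalised confusion-matrix rates (in percent) as used in Tables 3-4.
# Labels: 0 = normal voice, 1 = pathological voice.

def confusion_rates(y_true, y_pred):
    n_norm = sum(1 for t in y_true if t == 0)
    n_path = len(y_true) - n_norm
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # normal kept normal
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # normal -> pathological
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # pathological -> normal
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # pathological kept
    return {"TP": 100.0 * tp / n_norm, "FP": 100.0 * fp / n_norm,
            "FN": 100.0 * fn / n_path, "TN": 100.0 * tn / n_path}
```

With this normalization TP + FP = 100% on the normal row and FN + TN = 100% on the pathological row, matching Tables 3 and 4.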
From the experiments using the six different parameters as input and different structures of the GMM and ANN models, the classification rates for discriminating voices into the normal and pathological cases were obtained as shown in Table 2. The classification rates on test data were high enough that accurate classification could be realized, and it was clearly observed that the classification rate rose as the number of hidden layers or Gaussian mixtures increased and the ANN or GMM model became more sophisticated. For each number of GMM mixtures, the average classification rates on training and test data were obtained as shown in Fig. 3; for each number of ANN hidden layers, as shown in Fig. 4. Fig. 5 shows our previous result using the HMM-based method to classify pathological voice (Wang & Jo, 2006).

Figure 3. The average classification rate (%) in GMM (3, 4 and 5 mixtures).

Figure 4. The average classification rate (%) in ANN.

Figure 5. The average classification rate (%) in HMM (3, 5 and 7 states).

Table 5. The best case for each classification method.

Methods   Train   Test
GMM       98.4    95.2
ANN       98.0    94.2
HMM*      93.8    91.7

*different characteristic parameter sets

Comparing the results of Fig. 3 and Fig. 4, the best GMM configuration showed slightly better classification than the best ANN performance (by 0.4% on training data and 1.0% on test data). Although the absolute rate difference is small, it indicates that the GMM can be used as the more robust classifier in pathological

voice classification, giving a classification rate comparable to the ANN. From Fig. 3 and Fig. 5, the best GMM cases are 4.6% (training data) and 3.5% (test data) higher than the HMM-based method; note, however, that the HMM-based method used different characteristic parameter sets. Table 5 presents the best case of the average classification rate for each classification method.

6. Conclusions

This paper performed classification of pathological voice using the GMM method, as an attempt to obtain better classification performance than the previously applied ANN method. The pathological classification method based on the GMM shows a 0.4% average improvement on training data and a 1.0% average improvement on test data. Although the difference is small, this shows that the GMM can be used effectively for classification of pathological voice, with more robustness in practical applications. Comparing the HMM-based method to the GMM method, the latter shows better results for the same pathological data; however, the characteristic parameters differ between the two experiments, so a direct comparison is not proper. In future work, to improve performance with real data, more investigation is required into the proper number of Gaussian mixtures and the proper parameter sets. It is also highly desirable to extend the size of the pathological voice database.

7. References

Ritchings, R.T., McGillion, M.A., & Moore, C.J. (2002). Pathological voice quality assessment using artificial neural networks, Medical Engineering & Physics, 24(8), 561-564, Elsevier.

Wang, J. & Jo, C. (2006). Classification of Vocal Fold Disorder using Hidden Markov Models, Proceedings of the 23rd Korean Speech Communication & Signal Processing Conference, 229-232, Ansan, Korea.

Dibazar, A. A. & Narayanan, S. (2002).
A System for Automatic Detection of Pathological Speech, Proceedings of the 36th Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA.

Godino-Llorente, J.I. & Gomez-Vilda, P. (2004). Automatic detection of voice impairments by means of short-term cepstral parameters and neural network based detectors, IEEE Transactions on Biomedical Engineering, 51(2), 380-384.

Jo, C. & Kim, D. (1998). Diagnosis of Pathological Speech Signals Using Wavelet Transform, Proceedings of the International Technical Conference on Circuits/Systems, Computers and Communications, 657-660, Sokcho, Korea.

Jo, C., Kim, K., Kim, D., Wang, S., & Jeon, G. (2001). Comparisons of Acoustical Characteristics between ARS and DAT Voice, International Conference on Speech Processing, Taejon, Korea.

Li, T., Jo, C. & Wang, S. (2004). Classification of pathological voice including severely noisy cases, Proceedings of the 8th International Conference on Spoken Language Processing, I, 77-80, Jeju, Korea.

Operations Manual (1993). Multi-Dimensional Voice Program (MDVP) Model 4305, Kay Elemetrics Corp.

Reynolds, D. A., Rose, R. C. & Smith, M. J. T. (1992). PC-based TMS320C30 implementation of the Gaussian mixture model text-independent speaker recognition system, Proceedings of the International Conference on Signal Processing Applications & Technology, 967-973, Boston, MA, USA.