Using the Soundtrack to Classify Videos
Slide 1: Using the Soundtrack to Classify Videos
Dan Ellis
Laboratory for Recognition and Organization of Speech and Audio
Dept. Electrical Eng., Columbia Univ., NY USA

1. Machine Listening
2. Global Classification
3. Foreground Classification
4. Outstanding Issues
Slide 2: 1. Machine Listening
The task: extracting useful information from sound.
[Figure: grid of machine-listening tasks (Detect, Classify, Describe) across domains (Speech, Music, Environmental Sound), with examples including VAD, speech/music detection, emotion, ASR, music transcription, music recommendation, environment awareness, automatic narration, and sound intelligence.]
Slide 3: The Information in Audio
Environmental recordings contain information on:
- location: type (restaurant, street, ...) and specifics
- activity: talking, walking, typing, ...
- people: generic (2 males), specific (Chuck & John)
- spoken content (sometimes)
... but not:
- what people and things looked like
- day/night
... except when correlated with audible features.
Slide 4: Applications
- Audio lifelog diarization
  [Figure: a full-day personal-audio timeline (roughly 09:00 to 16:00) automatically segmented into episodes such as preschool, cafe, lecture, office, outdoor, lab, and meetings, with some speaker identities.]
- Consumer video classification
Slide 5: Consumer Video Dataset
- 25 concepts from a Kodak user study: boat, crowd, cheer, dance, ...
- Videos from YouTube search, filtered for raw, unedited material = 1873 videos
- Manually relabeled with the concepts
[Figure: concept-overlap matrix over the labeled concepts: museum, picnic, wedding, animal, birthday, sunset, ski, graduation, sports, boat, parade, playground, baby, park, beach, dancing, show, group of two, night, one person, singing, cheer, crowd, music, group of 3+.]
Slide 6: 2. Global Classification
Baseline for soundtrack classification:
- divide sound into short frames (e.g. 30 ms)
- calculate features (e.g. MFCC) for each frame
- describe the clip by statistics of the frames (mean, covariance) = "bag of features"
- classify by e.g. Mahalanobis distance + SVM (see the sketch below)
[Figure: video soundtrack spectrogram, per-frame MFCC features, and the clip-level MFCC covariance matrix.]
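As a concrete illustration, here is a minimal sketch of this clip-statistics baseline, assuming librosa and scikit-learn; the file names and labels are hypothetical placeholders, not the original dataset.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def clip_stats(wav_path, n_mfcc=20):
    """Summarize a soundtrack by the mean and covariance of its frame MFCCs."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, n_frames)
    mu = mfcc.mean(axis=1)
    cov = np.cov(mfcc)                       # frame statistics = "bag of features"
    iu = np.triu_indices(n_mfcc)             # covariance is symmetric: keep half
    return np.concatenate([mu, cov[iu]])

# Hypothetical training clips and concept labels
train = [("video1.wav", "beach"), ("video2.wav", "parade")]
X = np.array([clip_stats(p) for p, _ in train])
y = [label for _, label in train]
clf = SVC(kernel="rbf").fit(X, y)            # SVM over clip-level statistics
```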
Slide 7: Codebook Histograms
Convert the continuous feature distribution to a multinomial:
- fit a global Gaussian mixture model to the pooled MFCC frames
- assign each frame to its best-matching mixture component
- describe each clip (or category) by its mixture-component histogram
Classify by distance between histograms (KL, Chi-squared) + SVM; a sketch follows below.
[Figure: MFCC(0) vs. MFCC(1) scatter with the global GMM components, and a per-category histogram of counts over the GMM mixtures.]
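A minimal sketch of the codebook-histogram idea, assuming scikit-learn; the per-clip MFCC arrays and labels below are random stand-ins, and the codebook size is illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC
from sklearn.metrics.pairwise import chi2_kernel

def codebook_histograms(mfcc_clips, n_components=64):
    """Quantize frames against a global GMM and histogram the assignments."""
    gmm = GaussianMixture(n_components=n_components).fit(np.vstack(mfcc_clips))
    hists = []
    for frames in mfcc_clips:
        comp = gmm.predict(frames)                    # hard-assign each frame
        h = np.bincount(comp, minlength=n_components).astype(float)
        hists.append(h / h.sum())                     # normalize to a multinomial
    return np.array(hists)

# Hypothetical data: random stand-ins for per-clip (n_frames, n_mfcc) arrays
rng = np.random.default_rng(0)
clips = [rng.normal(size=(200, 20)) for _ in range(10)]
labels = [0, 1] * 5

H = codebook_histograms(clips)
clf = SVC(kernel=chi2_kernel).fit(H, labels)          # Chi-squared kernel SVM
```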
Slide 8: Global Classification Results (Lee & Ellis 2010)
[Figure: per-concept average precision for guessing vs. MFCC+GMM classifiers (single Gaussian + KL2; 8-GMM + Bhattacharyya; 1024-GMM histogram + log(p(z|c)/p(z))), across all concept classes plus the mean.]
- Wide range in performance: "audio" concepts (music, ski) vs. "non-audio" concepts (group, night)
- Large AP uncertainty on infrequent classes
Slide 9: Combining with Video (Chang et al. 2007)
Video classification by SIFT codebooks.
[Figure: per-concept average precision for a random baseline, video only, audio only, and A+V fusion, plus the mean AP (MAP).]
Audio adds most for dancing, baby, museum, ... (a late-fusion sketch follows below).
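The slides don't state the fusion rule, so the sketch below assumes a simple score-level (late) fusion by weighted averaging, with made-up scores, and reports average precision as in the figures above.

```python
import numpy as np
from sklearn.metrics import average_precision_score

def fuse_scores(audio_scores, video_scores, w_audio=0.5):
    """Score-level (late) fusion: weighted average of per-clip concept scores."""
    return w_audio * audio_scores + (1.0 - w_audio) * video_scores

# Hypothetical per-clip scores for one concept, with binary ground truth
audio_scores = np.array([0.9, 0.2, 0.7, 0.1])
video_scores = np.array([0.6, 0.3, 0.8, 0.2])
labels = np.array([1, 0, 1, 0])

for name, s in [("audio only", audio_scores),
                ("video only", video_scores),
                ("A+V fusion", fuse_scores(audio_scores, video_scores))]:
    print(name, average_precision_score(labels, s))
```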
Slide 10: 3. Foreground Classification
Global vs. local class models:
- tell-tale acoustics may be washed out in whole-clip statistics
- try iterative realignment of HMMs (Lee, Ellis & Loui 2010); a reduced sketch follows below
[Figure: spectrogram of a YouTube "baby" clip with frame labels (voice, baby, laugh, background). Old way: all frames contribute to the class model. New way: limited temporal extents, with a background model shared by all clips.]
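Here is a heavily reduced sketch of the realignment idea, assuming the hmmlearn package: a two-state HMM (a frozen, shared background state vs. a concept foreground state) alternates Viterbi alignment with re-estimation of the foreground Gaussian. The background parameters, clip arrays, and transition values are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def realign_foreground(clips, bg_mean, bg_cov, n_iter=5):
    """clips: list of (n_frames, n_dims) feature arrays for one concept.
    State 0 = shared background (frozen); state 1 = concept foreground."""
    pooled = np.vstack(clips)
    fg_mean, fg_cov = pooled.mean(axis=0), np.cov(pooled.T)
    for _ in range(n_iter):
        hmm = GaussianHMM(n_components=2, covariance_type="full")
        hmm.startprob_ = np.array([0.5, 0.5])
        hmm.transmat_ = np.array([[0.9, 0.1], [0.1, 0.9]])
        hmm.means_ = np.vstack([bg_mean, fg_mean])
        hmm.covars_ = np.stack([bg_cov, fg_cov])
        # Viterbi-align each clip; keep only frames assigned to the foreground
        fg_frames = np.vstack([c[hmm.predict(c) == 1] for c in clips])
        fg_mean, fg_cov = fg_frames.mean(axis=0), np.cov(fg_frames.T)
    return fg_mean, fg_cov
```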
Slide 11: Transient Features (Cotton, Ellis & Loui 2011)
- An onset detector finds energy bursts (the points of best SNR)
- A PCA basis represents each 300 ms x auditory-frequency patch
- = "bag of transients": object-related rather than ambience-related (see the sketch below)
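A minimal sketch of onset-anchored transient features, assuming librosa and scikit-learn; the mel representation, patch length, PCA size, and file name are illustrative assumptions, not the paper's settings.

```python
import numpy as np
import librosa
from sklearn.decomposition import PCA

def transient_patches(y, sr, patch_frames=30):
    """Cut a mel-spectrogram patch starting at each detected onset."""
    S = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr))
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="frames")
    return np.array([S[:, t:t + patch_frames].ravel()
                     for t in onsets if t + patch_frames <= S.shape[1]])

y, sr = librosa.load("clip.wav", sr=16000)     # hypothetical file
P = transient_patches(y, sr)
pca = PCA(n_components=20).fit(P)              # low-dim basis for the bursts
bag_of_transients = pca.transform(P)           # one vector per transient event
```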
Slide 12: Nonnegative Matrix Factorization
Decompose spectrograms into templates × activations: X = W H
(Smaragdis & Brown 2003; Abdallah & Plumbley 2004; Virtanen 2007)
- fast, forgiving gradient-descent algorithm
- extensions: 2D patches, sparsity control, ...
A sketch follows below.
[Figure: original mixture spectrogram and the recovered components: Basis 1 (L2), Basis 2 (L1), Basis 3 (L1).]
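A minimal sketch of spectrogram NMF, assuming librosa and scikit-learn's multiplicative/coordinate-descent NMF rather than the specific algorithms cited above; the file name and number of templates are illustrative.

```python
import numpy as np
import librosa
from sklearn.decomposition import NMF

# Hypothetical input file; any mono soundtrack excerpt would do
y, sr = librosa.load("clip.wav", sr=16000)
X = np.abs(librosa.stft(y))                 # magnitude spectrogram, freq x time

nmf = NMF(n_components=3, init="nndsvd", max_iter=500)
W = nmf.fit_transform(X)                    # spectral templates, freq x 3
H = nmf.components_                         # per-template activations, 3 x time
X_hat = W @ H                               # low-rank approximation of X
```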
Slide 13: 4. Outstanding Issues
- How to separate foreground & background?
- How to exploit prior knowledge of sounds?
- How to make classification credible?
Slide 14: Classifier Insight
How can we understand classification results? ... to inspire confidence & enrich results.
Temporal breakdown:
[Figure: spectrogram of YT_beach_1.mpeg (manual clip-level labels: beach, group of 3+, cheer); manual frame-level annotation (drum, wind, voice, cheer, water wave, ...); and the Viterbi path through a 27-state Markov model covering concept states (music, cheer, ski, singing, parade, dancing, wedding, sunset, sports, show, playground, picnic, ...), per-clip speaker states, and 2 global background states; plus cluster examples such as a "baby" cluster.]
Slide 15: Summary
- Machine listening: getting useful information from sound
- Environmental sound classification ... from whole-clip statistics?
- Transients = energy peaks ... separate foreground from background
- Classifier insight ... for confidence & analysis
Slide 16: References
[Chang et al. 2007] S.-F. Chang, D. Ellis, W. Jiang, K. Lee, A. Yanagawa, A. Loui, & J. Luo, "Large-scale multimodal semantic concept detection for consumer video," Proc. ACM MIR, 2007.
[Lee & Ellis 2010] K. Lee & D. Ellis, "Audio-Based Semantic Concept Classification for Consumer Video," IEEE Tr. Audio, Speech, Lang. Process. 18(6), Aug. 2010.
[Lee, Ellis & Loui 2010] K. Lee, D. Ellis, & A. Loui, "Detecting Local Semantic Concepts in Environmental Sounds using Markov Model based Clustering," Proc. ICASSP, Dallas, 2010.
[Cotton, Ellis & Loui 2011] C. Cotton, D. Ellis, & A. Loui, "Soundtrack classification by transient events," Proc. ICASSP, to appear, Prague, 2011.
[Ellis, Zeng & McDermott 2011] D. Ellis, X. Zeng, & J. McDermott, "Classifying soundtracks with audio texture features," Proc. ICASSP, to appear, Prague, 2011.