Challenges in Microphone Array Processing for Hearing Aids
Volkmar Hamacher
Siemens Audiological Engineering Group, Erlangen, Germany
SIEMENS Audiological Engineering Group, R&D Signal Processing and Audiology, V. Hamacher

Why directional microphones in hearing aids?
Sensorineural hearing loss: reduced time and frequency resolution leads to decreased speech intelligibility in noisy environments, with an SNR loss of typically 4-10 dB (Dillon, 2001). Up to now, no direct compensation method exists => increase the SNR by indirect methods.
Single-microphone noise reduction algorithms have not been shown to significantly improve speech intelligibility. Directional microphones implemented in hearing aids, however, can improve the SNR in a way that leads to improved speech intelligibility (e.g. Valente, 1995).
Three-microphone array: combination of 1st- and 2nd-order differential processing (TriMic)
Siemens Triano 3 three-microphone array: end-fire arrangement, 8 mm microphone spacing.
AI-DI (KEMAR): approx. 6.2 dB.
[Block diagram: the target signal reaches three microphone openings (A/D conversion not shown); internal delays T1 and T2 with subtraction stages form the 1st- and 2nd-order differential outputs, which pass through compensation filters CF1 (1st-order low pass, 1100 Hz) and CF2 (2nd-order high pass, 1100 Hz) and are summed.]
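The core of each differential stage is a delay-and-subtract operation between two closely spaced omni microphones. A minimal numpy sketch, with the 8 mm spacing from the slide but otherwise illustrative values (the internal delay T = d/c gives a cardioid pattern; the compensation low pass is omitted):

```python
import numpy as np

def first_order_diff_response(theta_deg, d=0.008, c=343.0, f=1000.0):
    """Magnitude response of a delay-and-subtract first-order
    differential pair for a plane wave from angle theta_deg
    (0 deg = end-fire front). Spacing d, internal delay T = d/c,
    which places the pattern null at 180 deg (cardioid)."""
    theta = np.deg2rad(theta_deg)
    T = d / c                       # internal (electrical) delay
    tau = (d / c) * np.cos(theta)   # external propagation delay
    omega = 2.0 * np.pi * f
    # y(t) = x(t) - x(t - T - tau)  =>  |Y| = |1 - exp(-j*omega*(T + tau))|
    return np.abs(1.0 - np.exp(-1j * omega * (T + tau)))

front = first_order_diff_response(0.0)    # target direction
rear = first_order_diff_response(180.0)   # null direction
```

The second-order stage of the TriMic cascades two such subtractions, which is why it needs the high-pass compensation filter shown in the diagram: differential arrays attenuate low frequencies by 6 dB/octave per order.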
Performance challenge
SNR effect:
  directional microphone vs. omni: +4.0 dB
  omni vs. open ear: approx. -0.5 dB
  net improvement: 3.5 dB
In many cases today's directional microphones do not provide enough SNR benefit to compensate for the 4-10 dB SNR loss caused by the hearing impairment.
=> Improve the SNR by advanced adaptive microphone-array processing algorithms.
Robustness challenge
Additional performance loss in real-life use:
- microphone mismatch
- array misalignment (e.g. non-0° or moving targets)
- earmold venting
Improve robustness by:
- adaptive microphone matching (amplitude and phase)
- beamformer design procedures with robustness against microphone mismatch (e.g. Doclo, 2003)
- adaptive target source tracking algorithms
- active noise control ...
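Adaptive microphone matching can be illustrated with a single-tap normalized LMS gain that tracks the sensitivity difference between two microphones. This is a toy sketch, not the slides' method: it corrects amplitude only (a complex tap per frequency band would also correct phase), and the 3.5 dB mismatch is an invented example value:

```python
import numpy as np

def match_gain_nlms(ref, mismatched, mu=0.05):
    """Adapt a single real gain g so that g * mismatched tracks ref.
    Normalized LMS; returns the converged gain estimate."""
    g = 1.0
    for r, x in zip(ref, mismatched):
        e = r - g * x                       # matching error
        g += mu * e * x / (x * x + 1e-12)   # NLMS update
    return g

rng = np.random.default_rng(0)
ref = rng.standard_normal(5000)   # reference microphone signal
mismatched = ref / 1.5            # second mic with lower sensitivity (illustrative)
g = match_gain_nlms(ref, mismatched)
```

In a real hearing aid the adaptation would run slowly and only in favourable signal conditions, so that the matching gain tracks drift in microphone sensitivity rather than the acoustic scene.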
Advanced multi-microphone noise reduction algorithms
Approaches known from the literature:
- Beamforming for microphone arrays taking head-shading effects into account (e.g. Meyer, 2001)
- Adaptive beamforming (e.g. Greenberg et al., 1992; Kompis & Dillier, 1993, 2001; Herbordt et al., 2002)
- Blind source separation (e.g. Parra, 2000; Anemueller, 2001)
- Coherence-based filtering (e.g. Allen et al., 1977)
- Binaural spectral subtraction (Doerbecker, 1996)
- Cocktail-party processors (e.g. Peissig, 1992; Bodden, 1992; Feng et al., 2001) ...
Parameters defining the acoustic scene
Each algorithm is designed for specific acoustic scenes, described by:
Spatial parameters: target source position, number of noise sources, position of noise sources, reverberation, dynamics (temporal changes), ...
Sound source characteristics: type (speech, music, etc.), level, spectrum, stationarity, ...
However, in situations other than those specified, many algorithms become ineffective or even make the SNR worse!
Adaptive beamformer for binaural hearing aids (Greenberg et al., 1992; Dillier & Kompis, 1993)
Assumptions:
- signal source exactly in the 0° position
- coherent noise incidence, i.e. one dominant lateral noise source
[Block diagram: the microphone signals x1(k) = s(k) + n1(k) and x2(k) = s(k) + n2(k) are summed to the primary path p(k) = 2s(k) + n1(k) + n2(k) and subtracted to the noise reference r(k) = n2(k) - n1(k), in which the 0° target cancels. An adaptive FIR filter w(k), driven by an adaptation algorithm, filters r(k); the result is subtracted from p(k), and scaling by 0.5 yields the monaural output s(k).]
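The structure on this slide (sum path as primary, difference path as target-free noise reference, adaptive FIR canceller) can be sketched with a normalized LMS loop. A minimal sketch under the slide's own assumptions; the tap count, step size, and test scenario are illustrative:

```python
import numpy as np

def adaptive_beamformer(x1, x2, n_taps=16, mu=0.05):
    """Two-channel adaptive beamformer sketch (Griffiths-Jim style):
    p(k) = x1 + x2 passes the 0-degree target with gain 2,
    r(k) = x2 - x1 is a target-free noise reference, and an NLMS
    filter cancels the coherent noise from the sum path."""
    p = x1 + x2                   # primary: 2s(k) + n1(k) + n2(k)
    r = x2 - x1                   # reference: n2(k) - n1(k)
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)
    out = np.zeros(len(p))
    for k in range(len(p)):
        buf = np.roll(buf, 1)
        buf[0] = r[k]
        e = p[k] - w @ buf        # noise-cancelled sum path
        w += mu * e * buf / (buf @ buf + 1e-8)   # NLMS update
        out[k] = 0.5 * e          # 0.5 undoes the factor-2 target gain
    return out

rng = np.random.default_rng(0)
s = rng.standard_normal(4000)
target_only = adaptive_beamformer(s.copy(), s.copy())  # target exactly at 0 degrees
n = rng.standard_normal(4000)
noise_only = adaptive_beamformer(n, 2.0 * n)           # one coherent lateral noise
```

Note how the design encodes the slide's assumptions: if the target is exactly at 0°, it cancels perfectly in r(k) and passes undistorted; if the target moves off-axis, it leaks into the reference and gets partially cancelled, which is exactly the robustness problem raised earlier.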
Adaptive beamformer (cont'd)
Acoustic scene: one dominant lateral noise source; target source nearly exactly in the 0° position.
[Figure: hearing aid user facing the target source, with a single lateral noise source.]
Binaural spectral subtraction (Doerbecker et al., 1996)
Assumptions:
- coherent target signal, e.g. a near speaker
- diffuse noise field
[Block diagram: each channel is processed as analysis window -> FFT -> spectral subtraction rule W_i(f) -> IFFT -> overlap-add, yielding the enhanced signals s1(k) and s2(k); a common noise power density estimator feeds both subtraction rules.]
Signal model: X1(f) = H1(f)S(f) + N1(f), X2(f) = H2(f)S(f) + N2(f).
Noise power density estimation is based on correlation analysis. With H1(f) = H2(f), the noise power density can be estimated as
Φn1n1(f) = Φn2n2(f) = (Φx1x1(f) + Φx2x2(f)) / 2 - |Φx1x2(f)|.
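The key trick is that the coherent target appears in the cross-spectrum Φx1x2 while diffuse noise (largely uncorrelated between the ears) does not, so subtracting |Φx1x2| from the averaged auto-spectra leaves an estimate of the noise power. A numpy sketch under the slide's assumptions (H1 = H2, uncorrelated noise at the two ears); the frame length and averaging scheme are illustrative:

```python
import numpy as np

def avg_csd(x, y, nfft=256):
    """Cross-spectral density averaged over non-overlapping Hann frames."""
    frames = len(x) // nfft
    win = np.hanning(nfft)
    acc = np.zeros(nfft // 2 + 1, dtype=complex)
    for i in range(frames):
        X = np.fft.rfft(win * x[i * nfft:(i + 1) * nfft])
        Y = np.fft.rfft(win * y[i * nfft:(i + 1) * nfft])
        acc += X * np.conj(Y)
    return acc / frames

rng = np.random.default_rng(1)
N = 256 * 400
s = rng.standard_normal(N)      # coherent target, identical at both ears (H1 = H2)
n1 = rng.standard_normal(N)     # diffuse-noise model: uncorrelated ear signals
n2 = rng.standard_normal(N)
x1, x2 = s + n1, s + n2

phi_x1x1 = avg_csd(x1, x1).real
phi_x2x2 = avg_csd(x2, x2).real
phi_x1x2 = avg_csd(x1, x2)

# noise PSD estimate from the slide's correlation analysis
phi_nn = 0.5 * (phi_x1x1 + phi_x2x2) - np.abs(phi_x1x2)
phi_nn_true = avg_csd(n1, n1).real   # ground truth for comparison
```

In a real diffuse field the inter-ear noise coherence is only low above roughly c/(2d) for ear distance d, not zero everywhere, so the white uncorrelated noise here is an idealization of the slide's diffuse-field assumption.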
Binaural spectral subtraction (cont'd)
Acoustic scene: diffuse noise; coherent target signals, i.e. target sources in the near field.
[Figure: hearing aid user surrounded by several near target sources in a diffuse noise field.]
Control problem
Prospects for the future of array processing: hearing aids will offer a set of highly specialized multi-microphone noise reduction algorithms, each designed for specific acoustic scenes.
Problem: the user will probably not be willing or able to a) monitor the acoustic environment continuously, b) recognize the specific acoustic scenes, and c) activate the best algorithm. Manual control of directional microphones is already a severe problem for many hearing aid users (e.g. Kuk, 1996; Cord et al., 2002).
=> Need for intelligent control systems in hearing aids, based on continuous classification of the acoustic scene and automatic activation of the appropriate algorithm.
Control challenge
Today's classification & control systems rely only on statistical information derived from a single microphone signal; spatial parameters are not taken into account.
=> Appropriate classification & control systems must be developed that determine the acoustic scene from both statistical and spatial information extracted by microphone-array analysis. Otherwise, the SNR improvement offered by advanced adaptive microphone-array processing will probably not fully turn into customer benefit in real life.
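One example of such spatial information is the inter-microphone coherence of the noise field: the adaptive beamformer on the earlier slide assumes one coherent lateral noise source, while binaural spectral subtraction assumes diffuse (incoherent) noise. A deliberately simplified toy decision rule, measured during a speech pause; the threshold, frame length, and scenarios are invented for illustration:

```python
import numpy as np

def msc(x, y, nfft=256):
    """Magnitude-squared coherence per frequency bin, averaged
    over non-overlapping Hann-windowed frames."""
    frames = len(x) // nfft
    win = np.hanning(nfft)
    sxx = np.zeros(nfft // 2 + 1)
    syy = np.zeros(nfft // 2 + 1)
    sxy = np.zeros(nfft // 2 + 1, dtype=complex)
    for i in range(frames):
        X = np.fft.rfft(win * x[i * nfft:(i + 1) * nfft])
        Y = np.fft.rfft(win * y[i * nfft:(i + 1) * nfft])
        sxx += np.abs(X) ** 2
        syy += np.abs(Y) ** 2
        sxy += X * np.conj(Y)
    return np.abs(sxy) ** 2 / (sxx * syy + 1e-12)

def select_algorithm(x1, x2, threshold=0.5):
    """Toy scene classifier: coherent noise field -> adaptive
    beamformer; incoherent (diffuse) field -> spectral subtraction."""
    coherence = msc(x1, x2).mean()
    return "adaptive beamformer" if coherence > threshold else "binaural spectral subtraction"

rng = np.random.default_rng(2)
n = rng.standard_normal(256 * 100)
coherent_choice = select_algorithm(n, np.roll(n, 3))              # one lateral source
diffuse_choice = select_algorithm(n, rng.standard_normal(256 * 100))  # diffuse field
```

A deployed classifier would combine such spatial features with the single-channel statistical features (level, modulation, spectral shape) already used in today's systems, rather than replace them.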
Limitation of classification & control
[Diagram: acoustic scene -> classification system -> activation of the appropriate algorithm; in parallel, the hearing aid user makes an individual and time-varying judgment of the hearing situation.]
The target (preferred) source is generally defined per acoustic scene by the classification system, but the user's actual preference is individual and time-varying, so discrepancies are possible.
Example: is a given scene 1 target source and 2 noise sources, or 3 target sources?
As long as a link to the brain is not available, manual control of the hearing aid will occasionally be necessary!
Conclusions
Microphone array processing is currently the only method to substantially improve the SNR. To fully compensate for the SNR loss in everyday life, three groups of challenges have to be met: performance (SNR), robustness, and automatic control.
Advanced multi-microphone noise reduction algorithms differ in their assumptions about the acoustic scene => need for automatic control systems.
Classification & control will never match the listener's decisions 100% => manual control is occasionally necessary when the actual hearing situation differs from the one estimated by the classification system.
Thank you for your attention!