
Spatial speech recognition in noise: Normative data for sound field presentation of the New Zealand recording of the Bamford-Kowal-Bench (BKB) sentences and Consonant-Nucleus-Consonant (CNC) monosyllabic words

Oscar M. Cañete, Suzanne C. Purdy
Speech Science, School of Psychology, The University of Auckland, New Zealand
Author contact: Oscar Cañete, tmocanete@gmail.com

Abstract

Normal hearing adults were tested with the New Zealand recordings of the Bamford-Kowal-Bench (BKB) sentences and the Consonant-Nucleus-Consonant (CNC) monosyllabic words in order to establish list equivalence and obtain normative data for sound field presentation. CNC words were presented at a fixed level (65 dB SPL) at a +5 dB signal-to-noise ratio (SNR). An adaptive task was used to measure speech recognition thresholds (dB SNR) for BKB sentences, with a fixed noise level of 60 dB SPL. Noise consisted of 100-talker babble. Pairs of BKB lists were used for the adaptive task. After removal of the word and sentence lists with the greatest differences, lists were equivalent, but linguistic background continued to have a significant effect on scores, with moderately large effect sizes. English monolingual speakers performed better than bilingual speakers on both CNC words and BKB sentences. Normative speech scores are presented for lateralised speech in noise recognition (speech and noise at ±45° azimuth, with right- and left-side presentations for CNCs and BKBs) and for speech and noise in front at 0° azimuth for BKBs. Combined results for all participants and for monolingual versus bilingual participants are presented.

Introduction

One of the main goals of speech perception assessment is to estimate a person's auditory capacity to recognise spoken language in everyday listening conditions (Mackersie, 2002).
Speech in noise tests represent real-life listening conditions better than testing in quiet: one of the most common complaints of people with hearing loss is difficulty understanding speech in noise, and pure tone hearing thresholds do not predict speech in noise abilities well in people with sensorineural hearing loss (Killion & Niquette, 2000). Accurate estimation of speech perception in noise is especially important for hearing aid fitting (Taylor, 2003) and for assessment of auditory processing (Jerger & Musiek, 2000). Several speech in noise tests are currently available for clinical use, such as the Hearing in Noise Test, HINT (Nilsson, Soli, & Sullivan, 1994), the Quick Speech-in-Noise test, QuickSIN (Killion, Niquette, Gudmundsen, Revit, & Banerjee, 2004), the Bamford-Kowal-Bench Speech-in-Noise Test, BKB-SIN (Etymotic Research, 2005), and the Words-in-Noise test, WIN (Wilson, 2003).

One of the more commonly used speech tests is the Bamford-Kowal-Bench (BKB) sentence lists, originally developed by Bench and colleagues for partially deaf British children aged eight to 15 years (Bench, Kowal, & Bamford, 1979). The BKB lists were designed to have grammar and semantic content appropriate for young children with hearing loss. The original BKB test consisted of 21 lists of 16 sentences each, scored on correct recognition of key words (50 words per list) (Bench et al., 1979). Using the same method as the original test, an Australian version (BKB/A) was developed in 1979 (Bench & Doyle, 1979). Bench et al. (1987) reported that the BKB/A sentences were more sensitive to the effects of hearing loss on speech perception than the NAL-CID sentences and CAL-PBM words commonly used in Australia at that time.

The Consonant-Nucleus-Consonant (CNC) monosyllabic words were developed in the United States in the 1960s (Peterson & Lehiste, 1962). The test is presented in open-set format and consists of 10 lists of 50 words each (150 phonemes per list). The words were selected to have a phonemic distribution similar to that of the English language, with a minimum frequency of one per million according to the Thorndike and Lorge 1944 frequency count (Bontrager, 1991), and equivalent phonemic distribution across lists (Lehiste & Peterson, 1959). CNC words are widely used clinically as part of the Minimum Speech Test Battery for adult cochlear implant users (Luxford, 2001).

To our knowledge there are no published normative data for CNC words and BKB sentences for the Speech Perception Assessment New Zealand (SPANZ) recordings of these lists (Kim & Purdy, 2014). The aims of this study were to: 1) determine list equivalence, and 2) obtain BKB NZ and CNC normative scores for normal hearing New Zealand adults for spatial speech recognition in noise using these speech lists.
Method

Participants: All participants' hearing thresholds were less than or equal to 20 dB HL, with no difference greater than 5 dB HL between ears at any frequency. All participants had negative middle ear histories and normal Type A tympanograms in both ears. Some participants were tested with both types of speech materials. Participant characteristics are summarised in Table 1. All participants were proficient English speakers, but within each group a proportion of participants were simultaneous or serial bilinguals, i.e., a different language was spoken within their early home environment, with English exposure occurring simultaneously or by school age.

Table 1. Characteristics of the participants tested using BKB sentences and CNC words.

                          BKB sentences,        BKB sentences,      CNC words,
                          spatially separated   speech and noise    spatially separated
                          speech & noise        in front            speech & noise
N                         18                    20                  26
Age, mean (SD), years     23.72 (3.57)          23.60 (3.27)        22.77 (3.77)
Range, years              20-33                 20-33               20-38
Gender: Male              3                     4                   6
        Female            15                    16                  20
Language: Monolingual     10                    7                   10
          Bilingual*      8                     13                  16

* Korean, Hindi, German and Mandarin languages

Equipment and Speech Material: Pure tone thresholds were measured at octave frequencies (0.25-8 kHz) using an AVANT A2D audiometer and TDH-39 earphones. Middle ear status was checked by measuring 226 Hz tympanograms using a GSI Tympstar immittance meter. Speech materials included the New Zealand modification (Kim & Purdy, 2014) of the Bamford-Kowal-Bench/Australian version (BKB/A) (Bench, Doyle, & Greenwood, 1987), which has 21 lists of 16 sentences each, recorded using a female voice (a New Zealand English native speaker), and the CNC monosyllabic words (Peterson & Lehiste, 1962), which have 10 lists of 50 items, recorded using a New Zealand English native male speaker. Speech materials were recorded in a sound studio with 16-bit resolution and a 44.1 kHz sampling frequency and normalised to -1 dB (Kim & Purdy, 2014). Multi-talker babble was presented via a DELL laptop and consisted of a seven-second segment of the NOISEX-92 speech babble recording with minimal amplitude variation, looped to generate several minutes of babble. The original NOISEX-92 babble recording consists of 100 people speaking in a canteen (room radius > 2 m), recorded using a half-inch Brüel & Kjaer condenser microphone onto digital audio tape (Varga & Steeneken, 1993). List 1 (items 9 and 11), List 3 (item 4) and List 4 (items 2 and 15) from the BKB/A version of the BKB sentences were modified to make the language more appropriate for New Zealand listeners.
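The babble preparation described above (looping a short segment out to several minutes and normalising the peak level to -1 dB) can be sketched as follows. The function names are illustrative, not from the study, and samples are assumed to be floating-point values in [-1, 1]:

```python
import math

def loop_segment(segment, n_samples):
    # Tile the short babble segment end-to-end until n_samples are produced.
    reps = -(-n_samples // len(segment))  # ceiling division
    return (segment * reps)[:n_samples]

def peak_normalize(samples, target_db=-1.0):
    # Scale so the absolute peak sits at target_db re full scale (here -1 dB).
    peak = max(abs(s) for s in samples)
    gain = 10.0 ** (target_db / 20.0) / peak
    return [s * gain for s in samples]

# Toy stand-in for real audio; with real samples at 44.1 kHz, a 7 s segment
# would be looped out to e.g. 180 * 44100 samples for 3 minutes of babble.
seven_second_segment = [0.5, -0.25, 0.1]
babble = peak_normalize(loop_segment(seven_second_segment, 7))
```

Looping a segment with minimal amplitude variation, as the authors did, avoids an audible level jump at each splice point.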
Procedure: Spatial speech recognition in noise testing was performed in the sound field. The loudspeaker setup comprised three speakers placed 1 m from the participant, with the speaker centres at approximately head height, at -45°, 0°, and +45° azimuth (Figure 1). The following conditions were tested for words and sentences: a) right ear condition (REC): signal to right ear, noise to left ear; b) left ear condition (LEC): signal to left ear, noise to right ear; and c) signal and noise in front (SNF).

Figure 1. Spatial speech (S) in noise (N) test setup showing the right ear condition (REC), left ear condition (LEC), and speech and noise front (SNF) condition.

A fixed presentation level was used for CNC words and an adaptive procedure was used for BKB sentences. The level of the CNC words through the loudspeakers was set at 65 dB SPL (linear weighting) and the multi-talker babble noise was fixed at 60 dB SPL (+5 dB signal-to-noise ratio, SNR). Whole-word scoring was used and percent correct responses (%) were determined. For BKB sentences, speech recognition thresholds (SRTs) in noise, defined as the SNR that produces 50% correct whole-sentence recognition, were measured using an adaptive (two-up/two-down) procedure (Plomp & Mimpen, 1979). Noise was presented at a fixed level (60 dB SPL) and the speech level was adjusted according to the participant's responses. The SRT in noise was determined for the REC, LEC and SNF conditions. For the BKB sentences the first two sentences served as practice items; if these were repeated correctly the speech level was decreased by 4 dB (initial step size). If the next sentence was repeated correctly the signal was decreased by 2 dB; if repeated incorrectly, the signal was increased by 2 dB (Ruscetta, Arjmand, & Pratt, 2005). BKB lists were divided into pairs of lists to have sufficient sentences to measure each SRT. After 26 sentences (from two lists), the presentation levels of the last 10 sentences were averaged and the SRT corresponding to the 50% correct identification level was calculated from this average. Each test consisted of a pair of BKB lists (1-2, 3-4, 5-6, 7-8, 9-10, 11-12, 13-14, 15-16, 17-18). To reduce list order effects when determining list equivalence, participants tested using CNC words and BKB sentences were divided into three groups of 13 (CNC, REC and LEC conditions), 9 (BKB, REC and LEC conditions) and 20 (BKB, SNF condition) participants each.
Within each group, list order was randomised and the order of speech/noise presentation side was counterbalanced across groups for lateralised conditions (REC, LEC) (Table 2).
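The adaptive track described above (a 4 dB step for the practice sentences, 2 dB steps thereafter, with the SRT taken as the mean presentation level of the last 10 of 26 sentences) can be sketched as a simulation. The logistic listener model and its slope are illustrative assumptions for demonstration, not part of the study:

```python
import math
import random

def sentence_correct(snr_db, true_srt_db, slope=1.0):
    # Illustrative psychometric function: P(correct) = 0.5 at the true SRT.
    p = 1.0 / (1.0 + math.exp(-slope * (snr_db - true_srt_db)))
    return random.random() < p

def adaptive_srt(true_srt_db, n_sentences=26, start_snr_db=4.0):
    # Level moves down after a correct response and up after an error;
    # 4 dB steps for the two practice sentences, 2 dB steps thereafter.
    snr, levels = start_snr_db, []
    for i in range(n_sentences):
        levels.append(snr)
        step = 4.0 if i < 2 else 2.0
        snr += -step if sentence_correct(snr, true_srt_db) else step
    # SRT = mean presentation level of the last 10 of the 26 sentences.
    return sum(levels[-10:]) / 10.0

random.seed(0)
estimates = [adaptive_srt(true_srt_db=-3.0) for _ in range(500)]
mean_srt = sum(estimates) / len(estimates)  # clusters around the true SRT of -3 dB
```

Because the level moves symmetrically after each response, the track oscillates around the SNR giving 50% correct, which is why averaging the final presentation levels estimates the SRT.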

Table 2. CNC word and BKB sentence list test sequence. CNC words and BKB sentences were presented in counterbalanced order (blocks 1 and 2; participant groups 1 and 2) when determining list equivalence.

Speech test        Block     Group 1                    Group 2
CNC words          Block 1   Lists 1-5 to right side    Lists 1-5 to left side
(n=13 per group)             Noise to left side         Noise to right side
                   Block 2   Lists 6-10 to left side    Lists 6-10 to right side
                             Noise to right side        Noise to left side
BKB sentences      Block 1   Lists 1-6 to right side    Lists 1-6 to left side
(n=9 per group)              Noise to left side         Noise to right side
                   Block 2   Lists 7-12 to left side    Lists 7-12 to right side
                             Noise to right side        Noise to left side

Data Analysis: The Shapiro-Wilk test of normality was applied to all data, and nonparametric tests were used to compare groups and conditions when assumptions of normality were not met. Across-list comparisons were conducted using analysis of variance (ANOVA) or Friedman's two-way analysis of variance by ranks. Between-group comparisons were conducted using Mann-Whitney U tests. A p value < .05 was considered statistically significant. IBM SPSS Statistics version 21.0 was used. The study was approved by the University of Auckland Human Participants Ethics Committee and all participants gave written informed consent.

Results

CNC words

Median CNC scores were high and close to 100% for the study participants, who had normal hearing and were listening to CNC words at 65 dB SPL in 60 dB SPL of contralateral noise (REC and LEC conditions). Nonparametric analyses showed significant differences in speech scores between blocks (Lists 1-5 vs. 6-10), U = 6968.50, p = .010, groups, U = 7269.00, p = .035, language groups (monolingual, bilingual), and lists, χ²(9) = 27.97, p = .010. Side of presentation (left-right) also produced a significant difference

in scores, U = 7096.50, p = .016. Pairwise comparisons showed that the greatest score difference was between Lists 3 and 10; once these two lists were removed there were no significant differences across lists, χ²(7) = 13.02, p = .071, or blocks, U = 4845.50, p = .162. After Lists 3 and 10 were removed, Mann-Whitney tests continued to show differences between groups, U = 4471.50, p = .020, r = -.16, and side of presentation, U = 4363.00, p = .009, r = -.18; however, the effect sizes (r) for these differences were small, and hence these group and side differences are not considered clinically significant. The REC vs. LEC difference was less than 1% (M = 97.79%, SD = 2.72 for REC vs. M = 98.54%, SD = 2.22 for LEC). Tables 3 and 4 therefore present CNC normative data averaged across blocks and side of presentation, excluding CNC word Lists 3 and 10. Linguistic background continued to influence CNC scores (Table 4), with an intermediate effect size, U = 3685.50, p < .001, r = -.25, highlighting the importance of considering linguistic background when examining speech in noise scores.

Table 3. Overall descriptive statistics for speech scores for CNC words (N = 26; % correct) and BKB sentences (N = 18; dB signal-to-noise ratio, SNR) for equivalent lists, excluding Lists 3 and 10 for CNC words and list pairs 5-6 and 9-10 for BKB sentences. For the spatially separated speech and noise conditions, results are averaged across the REC (right ear competing) and LEC (left ear competing) conditions.

Test condition                       Test        Mean    SD     Median   Max      Min     IQR
Spatially separated speech & noise   CNCs (%)    98.16   2.50   98.00    100.00   82.00   2.00
                                     BKBs (dB)   -2.70   1.90   -3.00    1.60     -6.00   2.50
SNF condition                        BKBs (dB)   5.88    1.53   5.90     8.20     1.00    1.38

Table 4. Descriptive statistics (SD = standard deviation, IQR = interquartile range) for monolingual and bilingual participants for CNC words (% correct) and BKB sentences (dB SNR). REC = right ear competing, LEC = left ear competing, SNF = signal and noise front.

                    N     Mean    SD     Median   Max      Min     IQR
CNC (%) REC/LEC
  Monolingual       10    99.00   1.46   100.00   100.00   94.00   2.00
  Bilingual         16    97.64   2.86   98.00    100.00   82.00   4.00
BKB (dB) REC/LEC
  Monolingual       10    -3.39   1.83   -3.35    1.60     -6.00   2.43
  Bilingual         8     -1.86   1.67   -2.05    1.60     -4.50   2.95
BKB (dB) SNF
  Monolingual       7     4.50    1.67   5.60     5.90     2.10    2.75
  Bilingual         13    6.36    0.84   6.20     7.60     4.60    1.60
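The effect sizes (r) reported for the Mann-Whitney comparisons are of the form r = Z/sqrt(N). As a rough check, Z (and hence r) can be recovered from a reported U via the normal approximation. The observation counts below (10 monolingual listeners x 8 retained lists = 80 scores; 16 bilingual x 8 = 128) are an assumption about how the list scores were pooled, and ties are ignored, so a small discrepancy from the published r is expected:

```python
import math

def mann_whitney_r(u, n1, n2):
    # Normal approximation to the Mann-Whitney U statistic (no tie correction),
    # converted to the effect size r = Z / sqrt(N).
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    return z / math.sqrt(n1 + n2)

# Reported CNC language-group comparison: U = 3685.50, r = -.25
print(round(mann_whitney_r(3685.50, 80, 128), 2))  # about -0.24 with these assumed counts
```

By the usual conventions, |r| around .1 is small, .3 intermediate, and .5 large, which matches the paper's labels for the r = -.16/-.18 (small), -.25 (intermediate) and -.42 (large) comparisons.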

BKB/A sentences

There were no differences across blocks of sentences (Lists 1-6 vs. 7-12) or groups for BKB SRT values. Differences for side of presentation were observed, however, U = 1123.50, p = .040. Pairwise comparisons to explore list differences revealed the greatest differences for pairs 5-6 and 9-10. Once these two pairs of lists were removed there were no significant differences across list pairs, χ²(18) = 1.423, p = .700; however, the significant difference due to side of presentation remained (REC vs. LEC), U = 1123.50, p = .040, r = -.19, albeit with a small effect size. As was the case for CNCs, there was a small listening advantage (~0.9 dB) for speech presented to the left side with noise on the right side (-2.27 dB SRT, SD = 1.97 for REC vs. -3.15 dB SRT, SD = 1.77 for LEC). As was the case for CNC words, there was a significant difference in scores across linguistic backgrounds, U = 324.50, p < .001, r = -.42 (Table 4), with a large effect size. Monolingual participants had better SRTs (more negative, Mdn = -3.35 dB) than bilingual participants (Mdn = -2.05 dB). For sentences presented at the same location as the noise (frontal, SNF condition), there were no list differences, χ²(2) = 1.425, p = .491; however, there was a difference between linguistic backgrounds, U = 195.50, p = .001, r = -.42. Monolinguals had better SRT scores (Mdn = 5.60 dB) than bilinguals (Mdn = 6.20 dB) for the SNF condition, as was the case for the REC/LEC conditions (Table 4).

Discussion

Overall, normal hearing adults had high CNC word scores, consistent with the favourable presentation level (65 dB SPL), the positive SNR (+5 dB) and the spatial separation of the speech and noise (90° separation between speech and noise loudspeakers for the REC and LEC conditions). BKB sentence speech recognition thresholds showed the expected improvement with spatial separation of the speech and noise (by 8.58 dB on average) when the SNF and REC/LEC conditions are compared.
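As a quick arithmetic check, both the spatial advantage and the left-side advantage quoted above follow directly from the reported means:

```python
# Mean BKB SRTs (dB SNR) from Table 3
srt_front = 5.88        # speech and noise both in front (SNF)
srt_separated = -2.70   # spatially separated, REC/LEC average
spatial_release = srt_front - srt_separated
print(round(spatial_release, 2))  # 8.58 dB, as quoted above

# Left-side advantage for BKB sentences (REC vs. LEC means)
print(round(-2.27 - (-3.15), 2))  # 0.88 dB, i.e. roughly 0.9 dB
```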
Scores were much poorer (i.e., higher SRTs) for the SNF condition, in which participants had no access to binaural separation cues (e.g. head shadow) to assist with speech perception in noise (Arbogast, Mason, & Kidd, 2005; Hawley, Litovsky, & Culling, 2004; Noble & Perrett, 2002). Using a similar test protocol, Ruscetta et al. (2005) reported SRTs for sentences in noise of -4.23 dB (spatially separated condition) for children with normal hearing. Rothpletz et al. (2012) reported much better SRTs for normal hearing adults performing a closed-set task in which the listener had to attend to a target phrase while ignoring a similar masker phrase. For this paradigm the average SRT was -20.5 dB when the target and masker were spatially separated (0° and 90°). For all speech conditions, linguistic background affected scores. Bilingual participants had poorer scores than monolingual New Zealand English participants despite the bilinguals having high proficiency in English (English language acquisition generally before six years of age, but by 10 years of age for some participants). Children and adults who are non-native English speakers perform more poorly than native English speakers on speech perception tests, particularly in noise (Crandell & Smaldino, 1996; Nábělek & Donahue, 1984; Takata & Nábělek, 1990). Von Hapsburg and Pena (2002) reviewed the evidence for effects of

bilingualism in speech audiometry and found that early bilinguals performed differently from late bilinguals (English language exposure after seven years of age). In favourable conditions, in quiet, bilinguals performed the same as monolingual subjects, but in degraded listening conditions, in the presence of background noise, performance was better when English was learnt early (before six years of age) rather than later in life (Carlo, 2009). Consistent with this, Mendel and Widner (2015) recently reported that bilingual Spanish/English normal hearing participants had poorer SRT scores than monolingual English speakers, similar to those of people with hearing loss. According to bottom-up processing theory, noisy environments restrict access to the phonetic features of speech. As bilingual speakers do not have the same phonetic inventories as monolinguals, speech in noise recognition performance is more affected for this population (Laing & Kamhi, 2003). Due to the linguistic heterogeneity of the New Zealand population (Statistics New Zealand, 2013), it is recommended that the composite normative scores (including bilingual and monolingual listeners, Table 3) are used clinically when evaluating the performance of adult New Zealanders with hearing loss against these norms, unless the clinical population being evaluated consists of monolingual New Zealand English speakers.

References

Arbogast, T. L., Mason, C. R., & Kidd, G. (2005). The effect of spatial separation on informational masking of speech in normal-hearing and hearing-impaired listeners. The Journal of the Acoustical Society of America, 117(4), 2169-2180.
Bench, J., Doyle, J., & Greenwood, K. (1987). A standardisation of the BKB/A sentence test for children in comparison with the NAL-CID sentence test and CAL-PBM word test. Australian Journal of Audiology, 9, 39-48.
Bench, J., & Doyle, J. (1979). The Bamford-Kowal-Bench/Australian version (BKB/A) standard sentence lists. Carlton, Victoria: Lincoln Institute.
Bench, J., Kowal, Å., & Bamford, J. (1979). The BKB (Bamford-Kowal-Bench) sentence lists for partially-hearing children. British Journal of Audiology, 13(3), 108-112.
Bontrager, T. (1991). The development of word frequency lists prior to the 1944 Thorndike-Lorge list. Reading Psychology: An International Quarterly, 12(2), 91-116.
Carlo, M. A. (2009). A review of the effects of bilingualism on speech recognition performance. SIG 6 Perspectives on Hearing and Hearing Disorders: Research and Diagnostics, 13(1), 14-20.

Crandell, C. C., & Smaldino, J. J. (1996). Speech perception in noise by children for whom English is a second language. American Journal of Audiology, 5(3), 47-51.
Etymotic Research. (2005). BKB-SIN speech-in-noise test, version 1.03 [CD]. Elk Grove Village, IL: Etymotic Research.
Hawley, M. L., Litovsky, R. Y., & Culling, J. F. (2004). The benefit of binaural hearing in a cocktail party: Effect of location and type of interferer. The Journal of the Acoustical Society of America, 115(2), 833-843.
Jerger, J., & Musiek, F. (2000). Report of the consensus conference on the diagnosis of auditory processing. Journal of the American Academy of Audiology, 11(9), 467-474.
Killion, M. C., & Niquette, P. A. (2000). What can the pure-tone audiogram tell us about a patient's SNR loss? The Hearing Journal, 53(3), 46-48.
Killion, M. C., Niquette, P. A., Gudmundsen, G. I., Revit, L. J., & Banerjee, S. (2004). Development of a quick speech-in-noise test for measuring signal-to-noise ratio loss in normal-hearing and hearing-impaired listeners. The Journal of the Acoustical Society of America, 116(4), 2395-2405.
Kim, J. H., & Purdy, S. C. (2014). Speech perception assessments New Zealand (SPANZ). New Zealand Audiological Society Bulletin, 24(1), 9-16.
Laing, S. P., & Kamhi, A. (2003). Alternative assessment of language and literacy in culturally and linguistically diverse populations. Language, Speech, and Hearing Services in Schools, 34(1), 44-55.
Lehiste, I., & Peterson, G. E. (1959). Linguistic considerations in the study of speech intelligibility. The Journal of the Acoustical Society of America, 31(3), 280-286.
Luxford, W. M. (2001). Minimum speech test battery for postlingually deafened adult cochlear implant patients. Otolaryngology - Head and Neck Surgery, 124(2), 125-126. doi:10.1067/mhn.2001.113035
Mackersie, C. L. (2002). Tests of speech perception abilities. Current Opinion in Otolaryngology & Head and Neck Surgery, 10(5), 392-397. doi:10.1097/00020840-200210000-00012
Mendel, L. L., & Widner, H. (2015). Speech perception in noise for bilingual listeners with normal hearing. International Journal of Audiology, (ahead-of-print), 1-8.

Nábělek, A. K., & Donahue, A. M. (1984). Perception of consonants in reverberation by native and non-native listeners. The Journal of the Acoustical Society of America, 75(2), 632-634.
Nilsson, M., Soli, S. D., & Sullivan, J. A. (1994). Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. The Journal of the Acoustical Society of America, 95(2), 1085-1099.
Noble, W., & Perrett, S. (2002). Hearing speech against spatially separate competing speech versus competing noise. Perception & Psychophysics, 64(8), 1325-1336.
Peterson, G. E., & Lehiste, I. (1962). Revised CNC lists for auditory tests. Journal of Speech and Hearing Disorders, 27(1), 62.
Plomp, R., & Mimpen, A. (1979). Improving the reliability of testing the speech reception threshold for sentences. International Journal of Audiology, 18(1), 43-52.
Rothpletz, A. M., Wightman, F. L., & Kistler, D. J. (2012). Informational masking and spatial hearing in listeners with and without unilateral hearing loss. Journal of Speech, Language and Hearing Research, 55(2), 511.
Ruscetta, M. N., Arjmand, E. M., & Pratt, S. R. (2005). Speech recognition abilities in noise for children with severe-to-profound unilateral hearing impairment. International Journal of Pediatric Otorhinolaryngology, 69(6), 771-779.
Statistics New Zealand. (2013). 2013 Census totals by topic: Language spoken tables. Retrieved from http://www.stats.govt.nz/browse_for_stats/people_and_communities/language.aspx
Takata, Y., & Nábělek, A. K. (1990). English consonant recognition in noise and in reverberation by Japanese and American listeners. The Journal of the Acoustical Society of America, 88(2), 663-666.
Taylor, B. (2003). Speech in noise tests: How and why to include them in your basic test battery. The Hearing Journal, 56(1), 40-42.
Von Hapsburg, D., & Pena, E. D. (2002). Understanding bilingualism and its impact on speech audiometry. Journal of Speech, Language, and Hearing Research, 45(1), 202-213.
Wilson, R. H. (2003). Development of a speech-in-multitalker-babble paradigm to assess word-recognition performance. Journal of the American Academy of Audiology, 14(9), 453-470.