SLHS 1301 The Physics and Biology of Spoken Language. Practice Exam 2

Chapter 9

1. In analog-to-digital conversion, quantization of the signal means that
a) small differences in signal amplitude over time are summed
b) small differences in signal amplitude over time are disregarded
c) small differences in signal amplitude over time are subtracted
d) small differences in signal frequency over time are disregarded

2. An analysis of the waveform for a vowel reveals that the duration of five fundamental periods is 40 ms. What is your best estimate of the fundamental frequency?
a) 25 Hz
b) 125 Hz
c) 250 Hz
d) 125 Hz
e) Insufficient information to calculate f0.

3. If sampling frequency (fs) is 500 Hz,
a) fs = 5 kHz
b) Ts = 0.005 s
c) Ts = 2 ms
d) Ts (sampling period) cannot be determined.

4. Which of the following is TRUE?
a) A spectrum is a plot of frequency as a function of time.
b) A spectrogram is a plot of amplitude as a function of time.
c) A waveform is a plot of amplitude as a function of frequency.
d) If a signal contains no frequency components above 7 kHz, its digitization needs a sampling rate of no less than 14 kHz for high quality recording.
e) All of the above.

5. Which of the following statements is TRUE?
a) Digital signals are always better than analog signals.
b) Digital means processing information by using electronic devices.
c) A digital filter selectively processes frequency information in a signal.
d) Given that human speech has few frequency components above 7 kHz, digitization of speech signals needs a sampling rate of at least 3.5 kHz for high quality recording.
e) All of the above.

6. What is the relationship between bit and byte?
a) 1 bit = 1 byte
b) 1 bit = 8 bytes
c) 1 bit = 1/8 byte
d) 1 bit = 10 bytes

7. What are the maximum allocated memory addresses (bytes) in a 32-bit computer?
a) 32 billion
b) 2^32
c) log2(32)
d) 10^32
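Several of the items above come down to one-line arithmetic. The short sketch below (Python; purely illustrative, and the variable names are my own) works through the numbers behind questions 2, 3, 4(d), 6, and 7.

```python
# Q2: five fundamental periods span 40 ms, so one period is 8 ms.
n_periods = 5
total_ms = 40.0
f0 = n_periods / (total_ms / 1000.0)      # 5 cycles / 0.040 s = 125 Hz

# Q3: the sampling period is the reciprocal of the sampling frequency.
fs = 500.0                                 # Hz
Ts = 1.0 / fs                              # 0.002 s = 2 ms

# Q4(d): Nyquist -- content up to 7 kHz needs a rate of at least 14 kHz.
highest_component = 7_000.0                # Hz
min_fs = 2 * highest_component             # 14,000 Hz

# Q6 and Q7: 8 bits per byte; 32 address bits give 2**32 addressable bytes.
bits_per_byte = 8
addressable_bytes = 2 ** 32                # 4,294,967,296

print(f0, Ts, min_fs, addressable_bytes)   # 125.0 0.002 14000.0 4294967296
```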

8. If the sampling frequency (Fs) in analog-to-digital conversion = 10,000 Hz, Ts =
a) 0.0001 sec
b) 1 x 10^-4 sec
c) 0.1 msec
d) all of the above

9. According to Nyquist's sampling theorem,
a) Fs should be at least ten times higher than the highest frequency of interest
b) Fs should be at least two times higher than the highest frequency of interest
c) Fs should be equal to the highest frequency of interest
d) Fs should always be as high as you can possibly make it

10. Which of the following is FALSE?
a) Electronic devices are all digital because they use electricity.
b) Digital signal processors typically involve mathematical operations such as amplification, filtering, spectrum analysis, automatic synthesis and recognition.
c) If Fs < 2 Fn, aliasing occurs, which causes distortion of the signal.
d) Quantization error refers to the difference between the digital signal and the sample values in digital processing.

11. Which of the following does not properly characterize the purposes of speech coding and speech compression?
a) To achieve high quality sound at higher sampling rates
b) To carry more messages within limited bandwidth
c) To optimize signal quality with low sampling rates and limited bandwidth
d) To process speech signals for cost-effective communication

12. Which of the following is used in digital speech analysis?
a) FFT
b) LPC
c) Filtering
d) All of the above.

13. Which of the following descriptions is false regarding digital spectrum analysis?
a) It allows researchers to find out the essential properties of the analyzed signal.
b) It allows real-time generation of the spectra and spectrograms of the signal as it is being produced.
c) It uses digital-to-analog conversion.
d) It is an integral component of spoken language technology.

14. Which of the following does not use digital technology?
a) Dell computers
b) Cingular cell phones
c) High definition television
d) CDs and DVDs
e) Cassette-tape answering machine

15. ADC in digital technology stands for
a) Advanced Digital Computing
b) Analog to Digital Converter
c) Digital to Analog Converter
d) Analysis of Direct Current
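Questions 9 and 10(c) both rest on the Nyquist criterion: a component at frequency f is captured faithfully only if Fs is greater than 2f, and anything above Fs/2 folds back as an alias. Below is a minimal demonstration (Python with NumPy; the 10 kHz rate and 6 kHz tone are arbitrary values chosen for the example).

```python
import numpy as np

fs = 10_000.0                 # sampling rate, Hz
f_in = 6_000.0                # input tone above the 5 kHz Nyquist limit
f_alias = fs - f_in           # folds back to 4,000 Hz

n = np.arange(0, 50)          # 50 sample indices
t = n / fs
x_true = np.cos(2 * np.pi * f_in * t)
x_alias = np.cos(2 * np.pi * f_alias * t)

# The two sampled sequences are numerically identical: at this sampling rate
# the 6 kHz tone is indistinguishable from a 4 kHz tone (aliasing).
print(np.allclose(x_true, x_alias))   # True
```

Running it prints True, which is exactly the kind of distortion option 10(c) describes.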

16. SNR in digital processing stands for
a) Signal to Noise Ratio
b) Speech to Noise Ratio
c) Spectrum of Noisy Resonance
d) Sampling Noise Reduction

Chapter 4

1. If the vocal folds open and close 180 times per second during vowel production, f0 of the resulting sound wave is
a) dependent on whether the talker is a male or a female
b) 125 Hz
c) 180 Hz
d) none of the above

2. Which of the following cartilages form the larynx?
a) hyoid (front), epiglottis (back), cricoid (bottom)
b) thyroid (front), arytenoid (back), cricoid (bottom)
c) glottis, epiglottis, velum
d) hard palate, soft palate, glottis

3. Which of the following statements is TRUE?
a) If f0 = 100 Hz, the second formant = 200 Hz.
b) The vocal tract serves as a resonator/filter by changing the harmonic amplitudes (not the frequencies) of the buzz sound produced by the vocal folds.
c) Voiceless sounds refer to the sounds that we cannot hear.
d) All speech sounds are produced by vibrating vocal folds.

4. Which of the following is not a bilabial sound?
a) /b/
b) /p/
c) /m/
d) /f/

5. Production of the phoneme /u/ requires
a) raising the front part of the tongue
b) lowering the front part of the tongue
c) raising the back part of the tongue
d) lowering the jaw with additional lip rounding

6. Production of the phoneme /a/ in English requires
a) raising the front part of the tongue
b) lowering the front part of the tongue
c) raising the back part of the tongue
d) lowering the jaw and the tongue.

7. What structure refers to the throat?
a) Pharynx
b) Oral cavity
c) Vocal tract
d) Epiglottis
e) Nasal cavity

8. During sustained articulation for a whispered vowel,
a) abductor and adductor muscles alternately contract to open and close the glottis
b) contraction of abductor muscles is sustained throughout the vowel
c) contraction of adductor muscles is sustained throughout the vowel
d) neither abductor nor adductor muscles are contracted

9. Which of the following is TRUE?
a) The principal vocal organs include the lungs, the trachea, the larynx, the pharynx, the nose, the jaw, the tongue, and the mouth.
b) The fundamental frequency is controlled by the mass, length and tension of the vocal tract.
c) The vocal tract consists of the pharyngeal cavity and the oral cavity, but not the nasal cavity.
d) Vocal organs are solely devoted to speech production.

10. Which of the following statements is TRUE?
a) Stops, fricatives, approximants and nasals differ from each other in place of articulation.
b) One main difference between consonants and vowels lies in articulatory constriction.
c) Consonants are typically longer in duration and higher in energy than vowels.
d) Whispered speech can be understood because it carries the f0 and formant information.

11. Which of the following intrinsic laryngeal muscles compose the main body of the vocal folds?
a) cricothyroid
b) posterior cricoarytenoid
c) interarytenoids
d) thyroarytenoid

12. The amplitudes of the harmonics with increasing frequency in the spectrum for the buzz sound produced by the vocal folds
a) decrease
b) increase
c) remain unchanged
d) saturate

13. If formant frequencies of a vowel are held constant but its f0 changes appropriately,
a) the vowel will remain the same, but perceived pitch will change
b) the perceived pitch will remain the same, but the vowel will change
c) both perceived pitch and the vowel will change
d) neither the perceived pitch nor the vowel will change

14. When the sound wave produced by the vibrating vocal folds excites the air-filled vocal tract,
a) the frequencies of the harmonics are changed
b) the amplitudes of the harmonics are changed
c) the amplitudes and frequencies of the harmonics are changed
d) neither the amplitudes nor the frequencies of the harmonics are changed

15. The resonant frequencies of the vocal tract are determined by
a) the amplitude of vocal fold vibration
b) the frequency of vocal fold vibration
c) the amplitude and frequency of vocal fold vibration
d) the size and shape of the vocal tract
e) None of the above.

16. What class of speech sounds is produced when the vocal folds remain abducted and turbulence is created at the point of constriction within the vocal tract?
a) voiced fricatives
b) vowel plosives
c) voiceless fricatives
d) voiced affricates
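Questions 12 through 15 all draw on the source-filter picture: the glottal buzz is a harmonic series whose amplitudes fall off with frequency, and the vocal tract resonances (formants) reshape those amplitudes without shifting the harmonic frequencies. The sketch below is a rough numerical caricature of that idea (Python with NumPy); the -12 dB per octave source slope, the formant values, and the bandwidth are illustrative assumptions, not measurements.

```python
import numpy as np

f0 = 100.0                                   # glottal rate sets the harmonic spacing
harmonics = f0 * np.arange(1, 41)            # 100, 200, ..., 4000 Hz

# Source: harmonic amplitudes roll off with frequency (about -12 dB/octave here).
source_db = -12.0 * np.log2(harmonics / f0)

# Filter: a crude resonance-shaped weighting that favors harmonics near F1, F2, F3.
formants = np.array([500.0, 1500.0, 2500.0])
bandwidth = 100.0
gain_db = np.zeros_like(harmonics)
for F in formants:
    gain_db += 20.0 * np.log10(1.0 / np.sqrt(1.0 + ((harmonics - F) / bandwidth) ** 2))

output_db = source_db + gain_db

# The harmonic FREQUENCIES are untouched; only their amplitudes have changed.
print(harmonics[:5])          # [100. 200. 300. 400. 500.]
print(output_db[:5].round(1))
```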

17. Suppose a given vowel has f0 = 100 Hz, F1 = 405 Hz, F2 = 2002 Hz.
a) The two lowest resonances in the vocal tract are close to the 4th and 20th harmonics.
b) The resonances in the vocal tract have only two components: F1 and F2.
c) This vowel cannot exist because the fundamental frequency = F2 - F1 = 1600 Hz.
d) According to Fourier analysis, this vowel has exactly three sinewave components at 100 Hz, 400 Hz, and 2000 Hz.

18. Voice, as we know it, results from three components: voiced sound, resonance, and articulation. Which of the following is TRUE?
a) Voiced sound is amplified and modified by the vocal tract resonators.
b) Vibratory cycle in vocal folds = open phase + closed phase
c) Breakdowns can happen to the air pressure system, the vibratory system, and the resonating system, creating various voice disorder symptoms.
d) All of the above.

Chapter 5

1. Which of the following is a unit of measurement for loudness?
a) dB SPL
b) Sone
c) Hz
d) Mel
e) dB IL

2. The three bones in the middle ear are
a) meatus, incus, and stapes
b) scala vestibuli, scala media, and scala tympani
c) cricoid, thyroid, and arytenoid
d) malleus, incus, and stapes
e) outer bone, middle bone, inner bone

3. Which of the following is TRUE?
a) Minimum audibility = absolute threshold of hearing
b) 1 dB SPL = 1 sone
c) 1 dB SPL = 1 phon
d) 1 Hz = 1 mel
e) 1 sone = 1 phon

4. Which of the following is FALSE?
a) The amount of masking depends on the intensity, spectrum and temporal characteristics of the masker.
b) When the intensity of the sound gets stronger on the right ear, the perceived auditory image moves to the right ear.
c) Auditory localization in space is accomplished by resolving interaural intensity and time differences.
d) A signal with frequency components of 240 Hz, 360 Hz, 480 Hz, and 600 Hz has a fundamental frequency of 240 Hz.

5. The waveform of a sound signal displays
a) amplitude, frequency, and duration
b) amplitude as a function of time
c) frequency as a function of time
d) amplitude as a function of frequency
e) frequency and amplitude as a function of time
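Two items on this page are quick calculations. In Chapter 4, question 17(a), the harmonics of a 100 Hz voice that fall nearest F1 = 405 Hz and F2 = 2002 Hz are the 4th (400 Hz) and 20th (2000 Hz). In Chapter 5, question 4(d), components at 240, 360, 480, and 600 Hz share a common spacing of 120 Hz, so the implied fundamental is 120 Hz rather than 240 Hz. A small check (Python; written only to verify the arithmetic):

```python
from functools import reduce
from math import gcd

# Chapter 4, Q17(a): nearest harmonic of f0 = 100 Hz to each formant.
f0 = 100
for formant in (405, 2002):
    n = round(formant / f0)
    print(f"F = {formant} Hz is closest to harmonic {n} ({n * f0} Hz)")
# -> harmonic 4 (400 Hz) and harmonic 20 (2000 Hz)

# Chapter 5, Q4(d): the implied fundamental is the common divisor of the components.
components = [240, 360, 480, 600]
fundamental = reduce(gcd, components)   # 120 Hz, so "240 Hz" is the false statement
print(fundamental)
```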

6. When a sound wave sets the tympanic membrane into vibration, the membrane vibrates
a) independently of the frequency of the sound wave
b) at the natural frequency of the tympanic membrane
c) at a frequency determined by the mass and stiffness of the tympanic membrane
d) at the frequency of the applied force

7. A major function of the middle ear is to
a) maintain equilibrium and balance.
b) keep infections contained so that the inner ear will not be contaminated.
c) serve as an amplifier.
d) cause the tympanic membrane to vibrate.

8. The hair cells inside the Organ of Corti are located on the
a) tympanic membrane
b) basilar membrane
c) tectorial membrane
d) Reissner's membrane
e) diaphragm membrane

9. The amplitude of the traveling wave on the basilar membrane for a sinusoid is
a) greatest near the basal end for low frequencies
b) greatest near the apical end for high frequencies
c) greatest near the apical end for low frequencies
d) approximately constant throughout its length
e) independent of sound frequency

10. Neural potentials are generated in the auditory system when
a) the tympanic membrane is forced inward
b) shearing forces on the cilia of the hair cells stimulate nerve fibers
c) the tectorial membrane rises to make contact with Reissner's membrane
d) the outer hair cells move towards the inner hair cells
e) sound reaches the brain

11. The minimum audibility curve, averaged for a large group of listeners with normal hearing, informs us that
a) the auditory system is most sensitive in a mid-frequency range
b) the auditory system is most sensitive for frequencies below 1000 Hz
c) the auditory system is most sensitive for frequencies above 5000 Hz
d) the auditory system is equally sensitive from 20 Hz to 20,000 Hz
e) we can never hear a sound below 0 dB

12. The unit of measure for perceived (or subjective) pitch is
a) mel
b) phon
c) Hz
d) sone

13. Auditory localization in space is accomplished by resolving
a) interaural time differences and interaural intensity differences
b) interaural intensity differences
c) interaural time differences
d) visual localization of the source of sound

14. When two identical sinusoids are presented binaurally to listeners under earphones, the listeners hear a single fused image within the cranium in the median plane. The image will move toward the left ear if
a) the signal to the right ear lags the signal to the left ear
b) the intensity of the signal to the right is increased
c) the signal to the right ear leads the signal to the left ear
d) the intensity of the signal to the left remains the same

15. A signal is presented to a listener in the presence of a masking noise that fluctuates in intensity. The amount of masking that will be produced depends on
a) the intensity of the masker
b) the spectrum of the masker
c) the temporal characteristics of the masker
d) all of the above

16. A major function of the outer ear is to
a) collect and carry sound to the middle ear.
b) perform Fourier analysis on sounds.
c) convert sound into nerve pulses.
d) reduce the mechanical vibrations of a sound to protect the middle ear.

17. Which of the following is FALSE?
a) The human inner ear is where sound waves are amplified by means of the vibrations of tiny bones.
b) The Eustachian tube connects the middle ear and throat.
c) In the cochlea, the hair cells are contained by the basilar membrane.
d) The auditory pathway includes the organ of Corti, cochlear nerve, spiral ganglion, cochlear nucleus, superior olive, inferior colliculus, and auditory cortex.

18. Which of the following factor(s) can contribute to hearing loss?
a) head injury
b) listening to very loud music and sounds, especially through headphones
c) ototoxic medication
d) All of the above

Answers

Chapter 9: 1. b, 2. b, 3. c, 4. d, 5. c, 6. c, 7. b, 8. d, 9. b, 10. a, 11. a, 12. d, 13. c, 14. e, 15. b, 16. a
Chapter 4: 1. c, 2. b, 3. b, 4. d, 5. c, 6. d, 7. a, 8. b, 9. a, 10. b, 11. d, 12. a, 13. a, 14. b, 15. d, 16. c, 17. a, 18. d
Chapter 5: 1. b, 2. d, 3. a, 4. d, 5. b, 6. d, 7. c, 8. b, 9. c, 10. b, 11. a, 12. a, 13. a, 14. a, 15. d, 16. a, 17. a, 18. d