Does Wernicke's aphasia necessitate pure word deafness, or the other way around? Or can they be independent, or is this still uncertain? Two types of auditory verbal agnosia (AVA): 1. A deficit at the prephonemic level, related to an inability to track rapid changes in sound; this form of AVA is associated with bilateral temporal lobe lesions. 2. A deficit in linguistic discrimination that does not follow a prephonemic pattern; this form is associated with left unilateral temporal lobe lesions and may even be considered a form of Wernicke's aphasia.
How can individuals with Pure Word Deafness have clear and intact speech production if they are unable to comprehend language? Hypothesis 1: an early stage of auditory analysis is impaired, while the semantic system and the speech output lexicon are intact (hence they can read). Hypothesis 2: there is either a complete or partial disconnection of the auditory input lexicon from the semantic system. If the sounds they hear are not processed as language, how can they themselves produce sounds with meanings?
Why is pure word deafness considered a prelanguage syndrome, but phonagnosia is not?
"The binding problem" and an analogous issue in the visual system. However, currently, it is generally assumed in the visual system that there is no need to recombine. Is there any evidence from the auditory system that might support the theory that recombination happens?
"The binding problem" and an analogous issue in the visual system. However, currently, it is generally assumed in the visual system that there is no need to recombine. Is there any evidence from the auditory system that might support the theory that recombination happens? Frequency; that is, the pitch of sounds goes up or down. The amplitude of a sound determines its volume (loudness). Tone is a measure of the quality of a sound wave.
Are there any disorders that make it difficult for people to understand sine-wave speech?
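For reference, sine-wave speech is built by replacing a natural utterance's formants with a few time-varying sinusoids. The sketch below is a toy illustration of that construction; the formant glides are invented placeholders, not measurements from any real utterance:

```python
# A toy sketch of sine-wave speech construction: replace the first three
# formant tracks of an utterance with time-varying sinusoids. The formant
# glides below are invented placeholders, not measured from real speech.
import numpy as np

def sine_wave_speech(formant_glides_hz, duration_s=0.5, sample_rate=16000):
    """Sum one gliding sinusoid per (start_hz, end_hz) formant track."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    signal = np.zeros_like(t)
    for f_start, f_end in formant_glides_hz:
        freq = np.linspace(f_start, f_end, t.size)          # linear frequency glide
        phase = 2 * np.pi * np.cumsum(freq) / sample_rate   # integrate frequency to phase
        signal += np.sin(phase)
    return signal / len(formant_glides_hz)                  # keep amplitude in range

# Hypothetical F1/F2/F3 glides, loosely /ba/-like
stimulus = sine_wave_speech([(400, 700), (1000, 1200), (2400, 2500)])
```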
Is there any data on how we process languages that aren't our own, specifically languages with phonological features so different that they can't be mistaken for nonsense words in our own language? Would those languages still be processed as "speech" or simply as sound? Does the McGurk effect work on phonemes that we are unfamiliar with?
The chapter on phoneme perception mentions that speakers of English may not recognize the difference between dentoalveolar and post-alveolar sounds, which speakers of Indo-Aryan languages may be able to distinguish (p. 447). This observation is attributed to the "tuning" of the brain during early development. What studies have been conducted on this tuning process or on early language development? The decline in nonnative consonant perception occurs between 8 and 12 months of age.
What changes might occur in the brain when a person learns a new language and begins to distinguish between phonemes that aren't usually distinguished in her/his native language?
In lecture we learned that subjects who were told that sinewave stimuli were speech sounds had increased activity in the left superior temporal cortex. Is there a neurobiological process or switch that gets "turned on" when the subject is initially told that the sinewave stimuli are speech sounds, even before they hear the stimulus?
How do people process foreign languages that they have never heard before or didn't know existed? Would the brain response to a foreign language be similar to that for gibberish/nonspeech stimuli?
Given that spoken speech encodes so much information about the speaker (fundamental frequency, voice quality, timbre, speed, and accent), do listeners with, for example, Pure Word Deafness or Auditory Agnosia also struggle with tasks involving recognition of such embedded information?
I was wondering if the auditory cortex is activated in deaf signers when processing sign language and, if so, whether the activity correlates specifically with phonological aspects of sign. http://www.nature.com/neuro/journal/v4/n12/abs/nn763.html
There is evidence that motor areas can be activated through speech perception; however, such activity may be due to the recruitment of brain areas related to working memory, cognitive control, etc. Is it possible that mirror neurons exist for motor commands related to the articulatory aspect of speech perception and production? How important, therefore, is the articulatory aspect of speech production across different languages? Are there languages that evoke more of a mirror neuron-like response in motor areas (due to articulation) than others? Also, in what way is this phenomenon variable across individuals and, especially, in people with autism, who may fixate less on motor and social cues?
With regard to articulatory gestures, when some people misinterpret "ba" as "da", is that a result of the subject's past memory associating what they see with "da", or is the difference simply the amount of focus they put on the visually presented "ga"?
How much do we process our own language when we speak? In what ways does hearing our own voice affect our intonation and ability to articulate? The article we read said that intonation, affect, etc., are independent of spoken language. But which do we process in order to speak normally: our own intonation or our actual language? Does a person with, for example, Type 2 pure word deafness speak completely normally, since they are able to process intonation, affect, etc.?
When sounds were presented as speech, they were indistinguishable by a speaker of a language where the sounds were not distinct phonemes. When they were presented as drops in a bucket, the listener was able to identify them as different. Would this effect be based on the same neural mechanisms as the difference in perception of sine-wave speech when it's presented as speech vs non-speech?
In which subcortical part of the left hemisphere would a lesion reside so as to sever both the ipsilateral and contralateral projections to Wernicke's area?
Why are the double-dissociation findings that seemingly point to the modularity of the auditory system still dismissed or ruled out by those who uphold centralist theories? Aren't the lesion findings enough to postulate that at least certain parts of the brain perform certain auditory tasks? How could/why would a centralist dispute that?
The Polster and Rose reading talks about two types of impairment that researchers look for when investigating pure word deafness, but then it seems to indicate that pure word deafness could be both types together, which is where we started. So, my question is: why do researchers continue to go in loops in researching it rather than looking at the symptoms through a different lens that would allow for a more fitting scope? Is there compelling evidence to support the looping?