Comparison of Oral and Computerized Versions of the Word Memory Test


Archives of Clinical Neuropsychology 25 (2010)

Comparison of Oral and Computerized Versions of the Word Memory Test

Laura L. Hoskins 1,*, Laurence M. Binder 2, Naomi S. Chaytor 3,4, David J. Williamson 5,6, Daniel L. Drane 4,7

1 Department of Psychiatry, North Shore-Long Island Jewish Health System, Manhasset, NY 11030, USA
2 Independent Practice, Beaverton, OR 97005, USA
3 Harborview Medical Center, University of Washington Regional Epilepsy Center, Seattle, WA 98104, USA
4 Department of Neurology, University of Washington School of Medicine, Seattle, WA 98195, USA
5 Ortho-McNeil Janssen Scientific Affairs, LLC, Raritan, NJ 08869, USA
6 Department of Neurology, University of South Alabama, Mobile, AL 36688, USA
7 Department of Neurology, Emory University School of Medicine, Atlanta, GA 30322, USA

*Corresponding author at: Department of Psychiatry, North Shore University Hospital, 400 Community Drive, Manhasset, NY 11030, USA. E-mail address: lhoskins@nshs.edu (L. L. Hoskins).

Accepted 10 July 2010

Abstract

A computer-administered version of the Word Memory Test (WMT) was compared with the orally administered version in two clinical samples to assess equivalency of the two versions. The two samples included inpatients at an epilepsy center (n = 67) and forensic and clinical referrals to a private practice (n = 58). A randomized procedure was used to assign participants to either version of the WMT. Only the results of the WMT primary effort measures were analyzed. Between-group comparisons of the WMT effort measures were conducted using Mann-Whitney nonparametric analysis. No significant differences were found between versions for several diagnostic subgroups. The data generally support equivalency of the orally administered version and the computerized version of the WMT effort measures in a mixed outpatient sample.

Keywords: Word Memory Test (WMT); Symptom validity testing; Computerized assessments; Neuropsychological assessment

The Word Memory Test (WMT; Green, Allen, & Astner, 1996) is a test of verbal learning and memory, which can be administered orally or self-administered by computer, designed to allow detection of suboptimal effort during testing (Green, Iverson, & Allen, 1999). In addition to measuring the examinee's ability to learn and remember word pairs, the WMT incorporates several indices of response bias (Wynkoop & Denney, 2005). The Immediate Recognition (IR), Delayed Recognition (DR), and Consistency (CNS) subtests are considered to be the primary effort measures. In prior studies, the WMT was sensitive to suboptimal effort, and it appears to be relatively insensitive to genuine neuropsychological impairment (Goodrich-Hunsaker & Hopkins, 2009; Green, Flaro, & Courtney, 2009; Green et al., 1999; Green, Lees-Haley, & Allen, 2002). The WMT was originally developed as an orally administered test with an examiner reading the list of 20 word pairs (Green & Astner, 1995) and was later adapted for computerized administration in a DOS environment utilizing visual presentation of the word pairs. The original computerized version was published by CogniSyst (Green et al., 1996). More recently, another computerized version programmed for the Windows environment was published by the test author, Green (2003).

A version of this paper was presented as a poster at the 35th annual meeting of the International Neuropsychological Society in Portland, Oregon.
© The Author 2010. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oxfordjournals.org. doi:10.1093/arclin/acq060. Advance Access publication on 18 August 2010.

The computerized version offers the advantages of efficiency because of self-administration and automated scoring, but the oral version of the WMT may be preferred by some examiners who choose to avoid the use of computers for test administration for a variety of reasons. The availability, cost, and portability of necessary equipment may render computerized administration prohibitive. Although increased portability of computerized versions of traditional paper-and-pencil tests may now be possible via laptop computer, bedside computerized testing of acute medical/neurological patients may remain unwieldy. Computerized administrations of neuropsychological tests typically require that an examinee is physically able to utilize a keyboard or manipulate and click a mouse in order to record responses, which may be severely limiting to individuals with physical or motor impairments. Computers may not be feasible or allowed in certain circumstances, such as forensic testing in a prison or other environment with increased security. Additional potential advantages of an examiner-administered assessment include an increased ability to observe an examinee's test-taking approach and to modify test administration for purposes of testing the limits (Agnew, Schwartz, Bolla, Ford, & Bleecker, 1991; Schatz & Browndyke, 2002). Finally, examiners may select traditional paper-and-pencil versions over computerized adaptations out of concern for an examinee's aversion to or lack of exposure to computers. This latter concern may be particularly salient for computer-adapted symptom validity tests (SVTs); attorneys could attribute failing WMT scores to confusion about the computer interface rather than suboptimal effort. Computers have been increasingly used for the administration of psychological and neuropsychological measures (Campbell et al., 1999; Mead & Drasgow, 1993; Russell, 2000), and several traditional paper-and-pencil neuropsychological tests have been computerized because the newer electronic versions provide many advantages over the examiner-administered paper-and-pencil tests. These have been discussed elsewhere (Campbell et al., 1999; Fortuny & Heaton, 1996; Kane & Kay, 1992; Lichtenberger, 2006; Schatz & Browndyke, 2002), but some of the inherent benefits of computer-based assessment include more rigorous standardization of administration and scoring, decreased examiner influence, automated comparison to normative databases and analysis of response patterns, and transfer of scores to a database for storage. While the relative advantages and disadvantages of traditional examiner- and computer-administered assessments must always be carefully weighed during the process of test selection, with special consideration of the type of data to be collected and the purpose of the evaluation, the examiner should first examine how modification of the traditional format into a computerized version affects the validity of a particular test and consider whether the two versions are interchangeable. The increasing adaptation of examiner-administered neuropsychological tests to a computer-based format and utilization of computerized versions necessitate that the equivalency of computerized and noncomputerized versions of the same tests be established.
Modification of traditional tests for computerized administration might introduce differences in task demand and response method that could potentially alter the sensitivity of a test to various examinee characteristics (Agnew et al., 1991), thus differentially impacting neuropsychological test results depending on administration format and affecting the comparability of test versions. If equivalency is not established, one cannot assume that the results on one version are comparable with the results obtained from the other version. Comparisons of computerized and noncomputerized versions of other neuropsychological tests have not consistently demonstrated equivalence. Research comparing standard and computerized versions of the Category Test has been equivocal, with some studies supporting their equivalence (Choca & Morris, 1992; Mercer, Harrell, Miller, Childs, & Rockers, 1997) and at least one study indicating that the standard version was not equivalent to its computerized counterpart (Berger, Chibnall, & Gfeller, 1994). French and Beaumont's (1990) research supported equivalency for the standard and computer-administered versions of the Mill Hill Vocabulary Test, but not the Standard Progressive Matrices Test; they concluded that the two versions of the latter test could not be used interchangeably, as patients performed significantly worse on the computerized version than on the standard version. Comparison of the traditional oral version of the Serial Digit Learning Test and a computerized version showed a correlation of only .23, compared with a test-retest correlation of .55 for repeated administrations of the oral version (Campbell et al., 1999). Furthermore, Mead and Drasgow (1993) reported a medium effect size of 0.72 for performance differences between computerized and noncomputerized versions of perceptual speed tests. These studies highlight the need to demonstrate that a computerized version of a test is equivalent to its standard examiner-administered counterpart. While it has long been recognized that adaptation of a traditional paper-and-pencil test to a computerized format may alter the nature of the test and that there is a need to assure psychometric equivalency (Hofer, 1985; Kane & Kay, 1992), the advantages of computerized administration may justify the use of computer versions for some clinicians in the absence of established psychometric equivalency with the traditional counterpart. As Schatz and Browndyke (2002) pointed out, computerized versions of the Wisconsin Card Sorting Test are widely utilized despite a lack of demonstrated equivalency with the standard manually administered version (Fortuny & Heaton, 1996). Given that the WMT is the most robustly researched SVT and is considered by many neuropsychologists to be one of the most accurate SVTs (Hartman, 2002; Sharland & Gfeller, 2007), demonstration of the equivalency between the standard and computer versions is of critical import.

Although test developers have historically cautioned against using computer-administered versions as interchangeable equivalents of the traditional examiner-administered version (Letz & Baker, 1986), the oral and computerized versions of the WMT tend to be utilized by most clinicians as interchangeable versions despite the possibility that the two versions may not yield equivalent results. The social demand characteristics of an in-person administration might affect performance on a test sensitive to effort, and performance might differ depending on whether or not a live person (i.e., the examiner) is sitting in front of the examinee. Situational variables, such as the presence of third-party observers and audio-recording, have been demonstrated to have a potent effect on neuropsychological test performance (Binder & Johnson-Greene, 1995; Constantinou, Ashendorf, & McCaffrey, 2002; Yantz & McCaffrey, 2009). The presence of an examiner may improve or worsen performance depending on the motivation of the examinee. Performance on the computerized Test of Memory Malingering by undergraduate research volunteers was worse when an examiner was not present to observe the examinee's performance, but the same effect did not occur for the computerized Wisconsin Card Sorting Test (Yantz & McCaffrey, 2007). This finding suggests that an examiner's presence and attention to an examinee's performance may affect task performance on computerized SVTs. As previously mentioned, the original and computerized versions of the WMT differ with regard to the modality of presentation of the word pairs (i.e., auditory vs. visual, respectively). Thus, it is possible that such differences produce discrepant results due to the different aspects or types of memory function (i.e., verbal memory vs. visual memory, or possible dual encoding of stimuli) being tapped by a particular version. Another difference between the two versions is that only the computerized version provides feedback regarding accuracy during the primary effort trials; the examiner does not provide feedback in the oral version. Although the computerized versions of the WMT are widely used and the oral and computerized forms have been described as appearing equivalent (Green et al., 2002, p. 100), we were unable to find a single published study that compared performance on the computerized version with the orally administered version. In an online published study, Allen and Green's (2002) data indicated that the two versions yielded similar results on the primary effort measures of IR, DR, and CNS. There were no significant differences between scores on the two forms, although mean scores were up to 7% higher for the oral WMT version than for the computerized administration in individuals suspected of putting forth suboptimal effort. Given this nonsignificant trend, they postulated that face-to-face oral administration may moderate the degree to which an individual not putting forth full effort performs poorly on the WMT, whereas the largely unobserved computerized administration contributes to exaggerated SVT performance in those who are not inclined to put forth full effort. Although Allen and Green concluded that the two versions are equivalent for their overall sample, the presence of a possible observer effect for face-to-face versus computerized administration of the WMT led the authors to state that SVTs that require face-to-face administration may be inferior to unattended computerized administration for discovering suboptimal effort during neuropsychological examination (2000, p. 840).
Although it is asserted in the WMT manual (Green, 2005) that the data collected from the oral version are applicable to the computerized version, it is important to empirically demonstrate equivalence of the oral and computer-administered versions of the WMT given the clinical and medicolegal implications of WMT failure. When the WMT is utilized as an SVT to detect suboptimal effort and symptom exaggeration, failure can be interpreted as evidence that the examinee may be feigning memory impairment. Such a performance on the WMT leads examiners to question the validity of the additional data obtained during an assessment and often raises the question of malingering. If the oral and computer versions of the WMT are not equivalent, interpretation of failures should include consideration of the version administered. The present study examined the equivalency of the computerized version of the IR, DR, and CNS scores of the WMT (Green, 2003, 2005) to the same scores derived from the orally administered version of the test in two clinical populations. One sample consisted of inpatients referred for neuropsychological examinations during continuous video-telemetry EEG monitoring for evaluation of medically intractable seizures at the University of Washington (UW) Regional Epilepsy Center. The other sample consisted of clinical and forensic referrals to an outpatient neuropsychological practice in the Portland, Oregon, area.

Materials and Methods

Subjects

The subjects were divided into two groups: (i) inpatients at the UW Regional Epilepsy Center and (ii) examinees at the outpatient private practice of one of the authors (LMB) in Oregon, including both forensic and clinical referrals. There were no significant differences between the two groups with regard to age, t(123) = 1.31, p = .19, or education, t(123) = 1.78, p = .078. All patients were 18 years of age or older, native English speakers, and able to give consent and understand all test instructions.

Inpatient participants. The inpatient sample in this study included a consecutive sample of adult patients referred to the UW Regional Epilepsy Center for continuous video-EEG monitoring during a 6-month period. All patients underwent video-EEG monitoring for a minimum of 24 hr, and most completed an extensive battery of neuropsychological tests and underwent an MRI scan of the brain. Some epilepsy patients also received additional diagnostic tests, such as PET or SPECT scans or invasive monitoring (i.e., grid or strip monitoring), in order to more definitively localize or lateralize their events. All inpatients were diagnosed based on their video-EEG recordings by one of three board-certified electroencephalographers and were given one of five diagnoses based on the following criteria: (a) Epileptic Seizures (ES): evidence of definite ictal EEG abnormalities or interictal epileptiform discharges; (b) Psychogenic Non-Epileptic Seizures (PNES): episodes of unresponsiveness or behavioral abnormality in the absence of epileptiform EEG changes; (c) Indeterminate Spells (IS): no spells during monitoring, or subjective feelings only, in the absence of EEG abnormality, unresponsiveness, or behavioral abnormality; (d) Co-Occurrence Group (CO): evidence of episodes fitting the criteria for both ES and PNES during the same or across multiple monitoring sessions; or (e) Physiological Non-Epileptic Seizures (PhyNES): patients with spells resulting from medical conditions other than epilepsy (e.g., syncopal episodes, sleep disturbance). For the purposes of this study, we excluded all patients experiencing acute seizures (<24 hr prior to WMT administration, n = 25), as we have found that post-ictal testing with the WMT differs from baseline in many of these individuals (Williamson et al., 2005). We also excluded those who had undergone previous neurosurgery (n = 6), those who were too severely impaired to be tested in any fashion (e.g., developmentally delayed and completely nonverbal, n = 5), and patients whose primary language was not English. We did not exclude cognitively impaired patients on the basis of any other criteria (e.g., some studies exclude those who are institutionalized or cannot live independently), as we wanted to see how low-functioning individuals perform on the two versions of the test. The UW inpatient sample consisted of 23 male and 44 female inpatients: 61 Caucasians (91%), 2 African Americans (3%), 2 Hispanics (3%), 1 African American-Caucasian (1.5%), and 1 Native American-Caucasian (1.5%). The diagnostic make-up of the inpatient sample consisted of 31 individuals (46.3%) diagnosed with epilepsy, 30 individuals (44.8%) with PNES, 2 individuals (3%) with co-occurrence of epilepsy and PNES, and 4 individuals (6%) with sleep disturbance. Age ranged between 18 and 83 years (mean = 38.50, SD = 12.05). Completed years of education ranged between 4 and 18 years (mean = 12.70, SD = 2.69).

Outpatient participants. The outpatient sample consisted of 37 men and 21 women: 55 Caucasians (94.8%) and 3 African Americans (5.2%).
Diagnostically, the sample consisted of 21 individuals (36.2%) with mild head trauma, 11 individuals (19.0%) with developmental or learning disorders, 9 individuals (15.5%) with psychiatric disorders, 7 individuals (12.1%) with moderate to severe traumatic brain injury, 3 individuals (5.2%) with unknown diagnoses, 2 individuals (3.4%) with dementia, 2 individuals (3.4%) with epilepsy, 1 individual (1.7%) with medically unexplained symptoms, 1 individual (1.7%) with a stroke, and 1 individual (1.7%) with hydrocephalus. Diagnoses were made on a clinical basis from history, neurodiagnostic studies, and medical diagnoses. Age ranged between 17 and 71 years (mean = 42.22, SD = 14.72). Completed years of education ranged between 9 and 20 years (mean = 13.55, SD = 2.64). Participants in both the inpatient and outpatient samples were randomly assigned to either the computerized or oral administration groups. In the inpatient setting, we had traditionally administered the oral WMT, which was the original format of presentation for this test, but shifted to the computer WMT for 5 weeks for quality control purposes (i.e., before deciding whether or not to switch to the newer computer version of the WMT, we wanted to be sure that both measures produced equivalent results). All consecutive examinees referred during that 5-week period were administered the computer version of the WMT, whereas examinees referred prior to or subsequent to this period all received the oral version. All inpatients included in this study came from an institutional review board-approved database registry maintained by the UW Regional Epilepsy Center, and all patients included in this database provided written consent to allow their data, including all disease-related and demographic variables and neuropsychological and neuroimaging results, to be stored on an ongoing basis for clinical research. As we requested the data used in the current study retrospectively, all inpatients were blind to any study specifics. In the outpatient setting, randomization was accomplished by generating a randomization list and then assigning consecutive examinees to either condition according to the list. All outpatient examinees included in this study provided informed consent to participate in a clinical evaluation, and their data were subsequently de-identified and entered into a research database without any individually identifiable health information. Among the inpatient sample, there were 33 subjects (10 men and 23 women) in the oral administration group, with a mean age of years (SD = 13.33), and 34 subjects (13 men and 21 women) in the computerized administration group, with a mean age of years (SD = 10.68). Mean WAIS-III FSIQ for the oral group was (SD = 19.07) and (SD = 17.36) for the computer group. Within the inpatient sample, there were no significant differences between the groups on age, U = ; p = .48, FSIQ, U = 48; p = .40, or education, U = 466; p = .23. There were similar numbers of men and women in the oral and computer administration groups, χ2(1) = 0.18; p = .67.
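The list-based random assignment described above for the outpatient setting can be illustrated with a short sketch. This is a minimal, hypothetical example and not the authors' actual procedure; the balancing scheme, seed, and sample size are assumptions introduced for illustration only.

```python
import random


def make_assignment_list(n, seed=None):
    """Pre-generate a roughly balanced randomization list for n consecutive examinees.

    Hypothetical sketch of a fixed assignment list of the kind described in the
    text; how the original list was balanced and generated is not stated.
    """
    rng = random.Random(seed)
    half = n // 2
    assignments = ["oral"] * half + ["computer"] * (n - half)
    rng.shuffle(assignments)
    return assignments


# Consecutive referrals are then assigned in order from the pre-generated list.
assignment_list = make_assignment_list(58, seed=42)
print(assignment_list[:10])
```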

In the outpatient sample, there were 30 subjects (19 men and 11 women) in the oral administration group with a mean age of years (SD = 14.77) and 28 subjects (18 men and 10 women) in the computerized administration group with a mean age of years (SD = 14.80). Mean WAIS-III FSIQ for the oral group was (SD = 15.43) and (SD = 15.23) for the computer group. There were no significant differences between the groups on age, U = 357; p = .33, FSIQ, U = ; p = .10, or education, U = ; p = .67. There were similar numbers of men and women in the oral and computer administration groups, Pearson χ2(1) = 0.001; p = .

Procedure

All patients completed a comprehensive neuropsychological assessment that included either the oral or the computer version of the WMT. For the inpatient sample, the WMT was administered during the first hour of testing and preceded all other memory measures. Order of test administration was not controlled in the outpatient sample. The inpatient and outpatient samples were analyzed separately, given the potentially disparate characteristics of these two samples. For the purposes of this study, only the data from the WMT primary effort measures, including IR, the DR trial, and CNS, were utilized. Both versions of the WMT were administered according to standardized instructions. Green and Astner (1995) provided a description of the oral version and instructions for administration. Green (2003) provided a description of the computer version and instructions for administration.

Results

Because the data violated the assumption of normality, between-group comparisons of the WMT effort measures were analyzed using Mann-Whitney nonparametric analyses. There were no significant differences between the two versions on the DR effort measure for either the inpatient sample, DR: U = 482; p = .32, or the outpatient sample, DR: U = ; p = .44. However, there was a nonsignificant trend with outpatients administered the oral version scoring lower on CNS than those administered the computer version, CNS: U = ; p = .07. Inpatients did not significantly differ on CNS, U = ; p = .11. On the IR measure in the inpatient sample, those administered the computer version scored significantly higher, U = ; p = .02, although the effect size of 0.15 (Cohen's d) was small. However, for the outpatient sample, there was no significant difference in IR effort scores between versions, U = ; p = .17. Means and standard deviations for percentages of WMT variables for the inpatient and outpatient samples are listed in Tables 1 and 2. The published failure cutoff scores for the WMT effort measures were utilized (Green et al., 1996). The specific cutoff scores that identify poor effort are not identified here but are available from the authors and can be found in the test manual, per Sweet's (1999) recommendation. Failure rates for the inpatient and outpatient samples are presented in Table 3. Among the inpatient sample, 10 (30.3%) subjects in the oral administration group and 9 (26.5%) subjects in the computer administration group failed at least one of the three effort measures. Among the outpatient sample, 12 subjects (40.0%) in the oral administration group and 7 subjects (25.0%) in the computer administration group failed at least one of the three effort measures.
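As a concrete illustration of the analysis just described, the sketch below runs a Mann-Whitney U comparison of two score distributions and computes Cohen's d for the same contrast. It uses made-up IR percentages, not the study data, and is only a generic example of the statistical approach named in the text.

```python
import numpy as np
from scipy import stats

# Hypothetical IR percentage scores for two administration groups (not study data).
oral_ir = np.array([97.5, 100.0, 95.0, 82.5, 100.0, 90.0, 100.0, 97.5])
computer_ir = np.array([100.0, 100.0, 97.5, 100.0, 92.5, 100.0, 95.0, 100.0])

# Nonparametric between-group comparison, as used for the WMT effort measures.
u_stat, p_value = stats.mannwhitneyu(oral_ir, computer_ir, alternative="two-sided")

# Cohen's d with a pooled standard deviation, as a companion effect-size estimate.
pooled_sd = np.sqrt((oral_ir.std(ddof=1) ** 2 + computer_ir.std(ddof=1) ** 2) / 2)
cohens_d = (oral_ir.mean() - computer_ir.mean()) / pooled_sd

print(f"U = {u_stat:.1f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
```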
Failure rates were not significantly different for the oral and computer administration groups in either the inpatient sample, Pearson χ2(1, N = 67) = 0.01; p = .94, or the outpatient sample, Pearson χ2(1, N = 58) = 0.88; p = .35. The combined failure rate for both WMT versions was 28.4% for the inpatient sample and 32.8% for the outpatient sample. Equivalency was further examined by breaking each sample down into diagnostic subgroups. Among the inpatient sample, there were no significant differences in mean scores between versions for participants diagnosed with epilepsy, IR: U = 91.50; p = .26, DR: U = 117; p = .94, CNS: U = 107; p = .63, or PNES, IR: U = 74; p = .11, DR: U = 89; p = .34, CNS: U = 81; p = .20. However, while the differences in inpatient failure rates did not reach statistical significance, numerical differences were evident; specifically, epilepsy patients failed the computer version more frequently than the oral version (31.3% vs. 14.3%), Pearson χ2(1, N = 31) = 0.33; p = .57, and PNES patients more frequently failed the oral version (43.8% vs. 28.6%), Pearson χ2(1, N = 30) = 0.23; p = .63.

Table 1. Means and standard deviations for the WMT effort measures: Inpatient sample. Oral (n = 33); Computer (n = 34). IR percentage, DR percentage, CNS percentage.
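The failure-rate comparisons reported above are 2 x 2 chi-square tests on pass/fail counts by administration group. The sketch below shows the general form using scipy, with the inpatient counts taken from Table 3; whether the original analysis applied a continuity correction is not stated, although scipy's default Yates-corrected statistic lands close to the reported inpatient value.

```python
import numpy as np
from scipy.stats import chi2_contingency

# 2 x 2 fail/pass counts by administration group, from Table 3 (inpatient sample):
# 10 of 33 oral examinees and 9 of 34 computer examinees failed at least one measure.
table = np.array([
    [10, 33 - 10],   # oral:     fail, pass
    [9, 34 - 9],     # computer: fail, pass
])

# Pearson chi-square test of independence; scipy applies a Yates continuity
# correction to 2 x 2 tables by default.
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.2f}")
```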

Table 2. Means and standard deviations for the WMT effort measures: Outpatient sample. Oral (n = 30); Computer (n = 28). IR percentage, DR percentage, CNS percentage.

Table 3. Inpatient and outpatient WMT failure rates
Sample       Oral          Computer
Inpatient    10 (30.3%)    9 (26.5%)
Outpatient   12 (40.0%)    7 (25.0%)
Note: Failure rates are shown as the number of subjects scoring below the published cutoffs, with the percentage of the group failing shown in parentheses.

Table 4. Means and standard deviations for the WMT effort measures: Epilepsy participants. Oral (n = 14); Computer (n = 17). IR percentage, DR percentage, CNS percentage.

Table 5. Means and standard deviations for the WMT effort measures: PNES participants. Oral (n = 16); Computer (n = 14). IR percentage, DR percentage, CNS percentage.
Note: WMT = Word Memory Test; M = Mean; SD = Standard Deviation; PNES = Psychogenic Non-Epileptic Seizures; IR = Immediate Recognition; DR = Delayed Recognition; CNS = Consistency.

Means and standard deviations for the epilepsy participants are presented in Table 4. Means and standard deviations for the PNES participants are presented in Table 5. In a follow-up analysis, we found that there was a nonsignificant trend for PNES patients to fail the oral version more often than patients with epilepsy (43.8% vs. 14.3%), Pearson χ2(1, N = 30) = 3.09; p = .08, yet there was virtually no difference in their performance on the computer version (28.6% for PNES vs. 31.3% for epilepsy), Pearson χ2(1, N = 30) = 0.03; p = .87. Such a difference in discriminatory power based on test version may have significant clinical implications but would need to be replicated given our small sample size. Among the outpatient sample, there were no significant differences in mean scores between versions for participants with a history of mild head trauma, IR: U = 36.50; p = .19, DR: U = 39; p = .25, CNS: U = 35.50; p = .17, developmental/learning disabilities, IR: U = 13.50; p = .78, DR: U = 10; p = .32, CNS: U = 10; p = .35, or psychiatric conditions, IR: U = 8; p = .61, DR: U = 7.50; p = .52, CNS: U = 7.50; p = .52.

Table 6. Means and standard deviations for the WMT effort measures: Mild head trauma participants. Oral (n = 10); Computer (n = 11). IR percentage, DR percentage, CNS percentage.

Table 7. Means and standard deviations for the WMT effort measures: Learning disability participants. Oral (n = 6); Computer (n = 5). IR percentage, DR percentage, CNS percentage.

Table 8. Means and standard deviations for the WMT effort measures: Psychiatric participants. Oral (n = 5); Computer (n = 4). IR percentage, DR percentage, CNS percentage.

There were no significant differences in failure rates for any of the diagnostic subgroups within the outpatient sample. For the developmental/learning disability group, none of the six examinees failed the oral version (0.0%) and one of the five examinees failed the computer version (20.0%), Pearson χ2(1, N = 11) = 0.01; p = .92. Among the mild head trauma group, 5 of 10 examinees failed the oral version (50.0%) and 3 of 11 examinees failed the computer version (27.3%), Pearson χ2(1, N = 21) = 0.386; p = .53. For the psychiatric group, 2 of 5 examinees failed the oral version (40.0%) and 0 of 4 examinees failed the computer version (0.0%), Pearson χ2(1, N = 9) = 0.39; p = .53. The remaining outpatient groups (moderate to severe traumatic brain injury, unknown diagnoses, dementia, epilepsy, medically unexplained symptoms, stroke, and hydrocephalus) each had seven or fewer examinees, and failure rates were not compared across versions. Means and standard deviations for participants diagnosed with mild head injury are presented in Table 6, learning disabilities in Table 7, and psychiatric conditions in Table 8.

Discussion

Demonstration of equivalency of test formats is imperative to confirm that factors specific to computer administration do not modify test performance (Choca & Morris, 1992). The present study examined the equivalency of the computerized version and the orally administered version of the WMT in two clinical populations. We found negligible differences between the two versions of the WMT in either the outpatient or inpatient samples when considering each sample as a whole. No significant differences were found in rate of failure for either the outpatient or the inpatient samples. There were no differences in mean scores on the primary effort measures (IR and DR) in the outpatient sample. The CNS mean score difference in the outpatient sample approached significance, with those administered the oral version scoring lower than those administered the computer version. In the inpatient sample, performance was equivalent on the DR and CNS variables.

While there was a significant difference in the inpatient sample on IR means, the resulting effect size was small. This difference again reflected a slightly lower mean for the oral version than for the computer version. Although small sample sizes may limit the usefulness of comparisons within specific diagnostic groups, we decided to conduct preliminary subgroup analyses to guide future research. No significant mean score differences on the primary effort measures were found between versions for the following diagnostic subgroups: epilepsy, PNES, mild head trauma, learning disabilities, or psychiatric conditions. However, patients with epilepsy more often failed the computer version (31.3%) than the oral version (14.3%), whereas patients with PNES showed the opposite pattern of performance (i.e., 28.6% failed the computer version but 43.8% failed the oral version). Future research with larger samples should investigate whether the oral version has more discriminatory power for these subgroups than the computer version. We should note that at least two of the lower-functioning epilepsy patients (both developmentally delayed) who failed the computer version had clear-cut difficulty with the physical completion of the task. For example, one of them held the space bar down as more than one item passed. According to the author of the WMT, it would have been acceptable for the examiner to manually enter the responses for these patients under these circumstances (P. Green, personal communication, 2007), and this approach may have lessened the rate of failure among low-functioning individuals. Without their inclusion, the failure rates for epilepsy patients would have been more equivalent between the two WMT versions (computer WMT = 21.4%, oral WMT = 14.3%). Likewise, with the current sample sizes, small shifts in the pass-fail distributions could greatly alter the χ2 results. Overall, more research with larger sample sizes is needed to determine whether the two versions are equivalent for various diagnostic subgroups. The failure rate within our outpatient sample was higher than those reported by Green (2003) for persons with no external incentives for poor neuropsychological performance. This is important to note, as a recent study designed to examine specificity and sensitivity estimated that 20% of WMT failures may be false positives and that lower cutoff scores improve the specificity of the test (Greve, Ord, Curtis, Bianchini, & Brennan, 2008). In our outpatient sample, for example, the groups with developmental learning and psychiatric problems were largely without financial incentives, yet 3 of 20 in those combined groups failed at least one of the primary effort measures. Such results could suggest that failures in patients with clear-cut brain dysfunction and other forms of pathology are more common than in Green's sample. However, we should also point out that a genuine memory impairment profile has been developed for the WMT using additional subtests not administered to all of the outpatient sample (Green, 2005), and the genuine memory impairment profile was not employed in the current study or in the study by Greve and colleagues (2008). This profile is intended to eliminate potential false-positive errors among patients experiencing true impairment, and a preliminary study employing it appears to be promising (Green et al., 2009).
Therefore, a change in cutoff scores appears to be premature, although further research exploring sensitivity and specificity issues across various patient subgroups with both versions of the WMT is recommended. The failure rate observed in the current study for the inpatient epilepsy monitoring sample (23.0% of ES and 36.7% of PNES patients failed the WMT regardless of version) is generally consistent with our prior work (Drane et al., 2006; Williamson et al., 2004; see Williamson, Drane, & Stroup, 2007 for specificity data). The base rate of failure on the WMT for ES patients in this study is consistent with that observed (22%) in a similar patient group on other commonly utilized SVT measures as well (Cragar, Berry, Fakhoury, Cibula, & Schmitt, 2006). However, our PNES failure rates for the WMT, both in our current data and in our previous work, are somewhat higher than the results obtained by Cragar with differing SVT measures (24%; Cragar et al., 2006). As Cragar and colleagues concluded, this may relate to differences in sensitivity to poor task engagement. This seems particularly plausible because the discrepancy between groups appears to be present for the PNES data but not for the ES data. We have previously reported a slightly higher rate of failure (28.0%) on the WMT in an unselected epilepsy sample that included patients tested post-ictally and those who were institutionalized (Williamson et al., 2004). Our exclusion of all post-ictal patients in the current study may account for the slightly lower rate of failure in our current sample. We have also previously reported slightly higher WMT failure rates (40%-50%) for PNES patients than found in the current PNES sample (Drane et al., 2006; Williamson et al., 2004); however, the higher failure rates previously reported were obtained from samples in which only the oral version was administered. In the current study, it is the slightly lower failure rate on the computer version that appears to be bringing down the overall rate for the PNES sample. One could speculate that the provision of feedback regarding accuracy on the computer version, or the absence of an examiner, alters the performance of the PNES patients in a manner not observed in ES patients. In summary, the mean scores generally support equivalency of the orally administered version and the computerized version of the WMT in the outpatient sample; however, there is a possible lack of equivalency in the inpatient sample. Within a mixed outpatient sample of the type described here, computerized administration of the WMT does not appear to significantly alter performance on the effort measures. Failure rates for the two versions should be further explored by subgroup to investigate whether performance patterns differ at this level. Subsequent validity papers are needed that examine the rate of WMT failure in various patient populations, and we recommend that these employ the new genuine impairment profile, based on our current experience (i.e., using the primary effort indices alone would likely lead to false-positive errors with more impaired patients) and current test publisher recommendations.

Conflict of Interest

Laurence Binder has used the oral version of the WMT in his forensic practice.

Acknowledgement

We would like to thank the anonymous reviewers for their helpful suggestions and feedback.

References

Agnew, J., Schwartz, B. S., Bolla, K. I., Ford, D. P., & Bleecker, M. L. (1991). Comparison of computerized and examiner-administered neurobehavioral testing techniques. Journal of Occupational Medicine, 33.
Allen, L. M., & Green, P. (2000). Moderated memory deficit exaggeration in face-to-face administration of the Word Memory Test. Archives of Clinical Neuropsychology, 15.
Allen, L. M., & Green, P. (2002). Equivalence of the computerized and orally administered Word Memory Test effort measures. WebPsychEmpiricist. Retrieved July 30, 2010.
Berger, S. G., Chibnall, J. T., & Gfeller, J. D. (1994). The Category Test: A comparison of computerized and standard versions. Assessment, 1.
Binder, L. M., & Johnson-Greene, D. (1995). Observer effects on neuropsychological performance: A case report. The Clinical Neuropsychologist, 9.
Campbell, K. A., Rohlman, D. S., Storzbach, D., Binder, L. M., Anger, W. K., Kovera, C. A., et al. (1999). Test-retest reliability of psychological and neurobehavioral tests self-administered by computer. Assessment, 6.
Choca, J., & Morris, J. (1992). Administering the Category Test by computer: Equivalence of results. The Clinical Neuropsychologist, 6.
Constantinou, M., Ashendorf, L., & McCaffrey, R. J. (2002). When the third party observer of a neuropsychological evaluation is an audio-recorder. The Clinical Neuropsychologist, 16.
Cragar, D. E., Berry, D. T. R., Fakhoury, T. A., Cibula, J. E., & Schmitt, F. A. (2006). Performance of patients with epilepsy or psychogenic non-epileptic seizures on four measures of effort. The Clinical Neuropsychologist, 20.
Drane, D. L., Williamson, D. J., Stroup, E. S., Holmes, M. D., Jung, M., Koerner, E., et al. (2006). Cognitive impairment is not equal in patients with epileptic and psychogenic nonepileptic seizures. Epilepsia, 47.
Fortuny, L. A., & Heaton, R. K. (1996). Standard versus computerized administration of the Wisconsin Card Sorting Test. The Clinical Neuropsychologist, 10.
French, C. C., & Beaumont, J. G. (1990). A clinical study of the automated assessment of intelligence by the Mill Hill Vocabulary Test and the Standard Progressive Matrices Test. Journal of Clinical Psychology, 46.
Goodrich-Hunsaker, N. J., & Hopkins, R. O. (2009). Word Memory Test performance in amnestic patients with hippocampal damage. Neuropsychology, 23.
Green, P. (2003). Green's Word Memory Test for Windows user's manual. Edmonton: Green's Publishing.
Green, P. (2005). Green's Word Memory Test for Windows user's manual - Revised. Edmonton: Green's Publishing.
Green, P., Allen, L., & Astner, K. (1996). The Word Memory Test: A user's guide to the oral and computer-assisted forms, US version 1.1. Durham, NC: CogniSyst.
Green, P., & Astner, K. (1995). Manual: Word Memory Test (Research Form I) - Oral Administration. Durham, NC: CogniSyst.
Green, P., Flaro, L., & Courtney, J. (2009). Examining false positives on the Word Memory Test in adults with mild traumatic brain injury. Brain Injury, 23.
Green, P., Iverson, G. L., & Allen, L. (1999). Detecting malingering in head injury litigation with the Word Memory Test. Brain Injury, 13.
Green, P., Lees-Haley, P. R., & Allen, L. M. (2002). The Word Memory Test and the validity of neuropsychological test scores. Journal of Forensic Neuropsychology, 2.
Greve, K. W., Ord, J., Curtis, K. L., Bianchini, K. J., & Brennan, A. (2008). Detecting malingering in traumatic brain injury and chronic pain: A comparison of three forced-choice symptom validity tests. The Clinical Neuropsychologist, 22.
Hartman, D. E. (2002). The unexamined lie is a lie worth fibbing: Neuropsychological malingering and the Word Memory Test. Archives of Clinical Neuropsychology, 17.
Hofer, P. J. (1985). Developing standards for computerized psychological testing. Computers in Human Behavior, 1.
Kane, R. L., & Kay, G. G. (1992). Computerized assessment in neuropsychology: A review of tests and test batteries. Neuropsychology Review, 3.
Letz, R., & Baker, E. L. (1986). Computer-administered neurobehavioral testing in occupational health. Seminars in Occupational Medicine, 1.
Lichtenberger, E. O. (2006). Computer utilization and clinical judgment in psychological assessment reports. Journal of Clinical Psychology, 62.
Mead, A. D., & Drasgow, F. (1993). Equivalence of computerized and paper-and-pencil cognitive ability tests: A meta-analysis. Psychological Bulletin, 114.
Mercer, W. N., Harrell, E. H., Miller, D. C., Childs, H. W., & Rockers, D. M. (1997). Performance of brain-injured versus healthy adults on three versions of the Category Test. The Clinical Neuropsychologist, 11.
Russell, E. W. (2000). The application of computerized scoring programs to neuropsychological assessment. In R. D. Vanderploeg (Ed.), Clinician's guide to neuropsychological assessment (2nd ed.). New Jersey: Lawrence Erlbaum Associates.
Schatz, P., & Browndyke, J. (2002). Applications of computer-based neuropsychological assessment. Journal of Head Trauma Rehabilitation, 17.

Sharland, M. J., & Gfeller, J. D. (2007). A survey of neuropsychologists' beliefs and practices with respect to the assessment of effort. Archives of Clinical Neuropsychology, 22.
Sweet, J. J. (1999). Malingering: Differential diagnosis. In J. J. Sweet (Ed.), Forensic neuropsychology: Fundamentals and practice. New York: Swets & Zeitlinger.
Williamson, D. J., Drane, D. L., & Stroup, E. S. (2007). Symptom validity tests in the epilepsy clinic. In K. B. Boone (Ed.), Assessment of feigned cognitive impairment: A neuropsychological perspective. New York: The Guilford Press.
Williamson, D. J., Drane, D. L., Stroup, E. S., Holmes, M. D., Wilensky, A. J., & Miller, J. W. (2005). Recent seizures may distort the validity of neurocognitive test scores in patients with epilepsy. Epilepsia, 46(Suppl. 8), 74.
Williamson, D. J., Drane, D. L., Stroup, E. S., Miller, J. W., Holmes, M. D., & Wilensky, A. J. (2004). Detecting cognitive differences between patients with epilepsy and patients with psychogenic nonepileptic seizures: Effort matters. Epilepsia, 45(Suppl. 7), 179.
Wynkoop, T. F., & Denney, R. L. (2005). Test review: Green's Word Memory Test (WMT) for Windows. Journal of Forensic Neuropsychology, 4.
Yantz, C. J., & McCaffrey, R. J. (2007). Social facilitation effect of examiner attention or inattention to computer-administered neuropsychological tests: First sign that the examiner may affect results. The Clinical Neuropsychologist, 21.
Yantz, C. L., & McCaffrey, R. J. (2009). Effects of parental presence and child characteristics on children's neuropsychological test performance: Third party observer effect confirmed. The Clinical Neuropsychologist, 23.


Potential for interpretation disparities of Halstead Reitan neuropsychological battery performances in a litigating sample,

Potential for interpretation disparities of Halstead Reitan neuropsychological battery performances in a litigating sample, Archives of Clinical Neuropsychology 21 (2006) 809 817 Potential for interpretation disparities of Halstead Reitan neuropsychological battery performances in a litigating sample, Abstract Christine L.

More information

CHAPTER 5 NEUROPSYCHOLOGICAL PROFILE OF ALZHEIMER S DISEASE

CHAPTER 5 NEUROPSYCHOLOGICAL PROFILE OF ALZHEIMER S DISEASE CHAPTER 5 NEUROPSYCHOLOGICAL PROFILE OF ALZHEIMER S DISEASE 5.1 GENERAL BACKGROUND Neuropsychological assessment plays a crucial role in the assessment of cognitive decline in older age. In India, there

More information

Improving Accuracy in the. through the use of Technology

Improving Accuracy in the. through the use of Technology Improving Accuracy in the Assessment of Malingering through the use of Technology Lisa Drago Piechowski, PhD, ABPP American School of Professional Psychology, Washington DC Key Points Assessment of malingering

More information

Admission Criteria Continued Stay Criteria Discharge Criteria. All of the following must be met: 1. Member continues to meet all admission criteria

Admission Criteria Continued Stay Criteria Discharge Criteria. All of the following must be met: 1. Member continues to meet all admission criteria CMS Local Coverage Determination (LCD) of Psychiatry and Psychology Services for Massachusetts, New York, and Rhode Island L33632 Outpatient Services Coverage Indications and Limitations Hospital outpatient

More information

MEDICAL POLICY No R4 NEUROPSYCHOLOGICAL AND PSYCHOLOGICAL TESTING

MEDICAL POLICY No R4 NEUROPSYCHOLOGICAL AND PSYCHOLOGICAL TESTING NEUROPSYCHOLOGICAL AND PSYCHOLOGICAL TESTING Effective Date: October 1, 2015 Review Dates: 7/07, 6/08, 6/09, 8/09, 8/10, 8/11, 8/12, 8/13, 8/14, 8/15, 8/16, 8/17 Date Of Origin: July 2007 Status: Current

More information

Traumatic Brain Injury for VR Counselors Margaret A. Struchen, Ph.D. and Laura M. Ritter, Ph.D., M.P.H.

Traumatic Brain Injury for VR Counselors Margaret A. Struchen, Ph.D. and Laura M. Ritter, Ph.D., M.P.H. Training Session 3a: Understanding Roles of Members of the Interdisciplinary Treatment Team, Evaluations by Team Members and the Utility of Evaluations Conducted by such Team Members. The Interdisciplinary

More information

The vulnerability to coaching across measures of malingering

The vulnerability to coaching across measures of malingering Louisiana State University LSU Digital Commons LSU Doctoral Dissertations Graduate School 2007 The vulnerability to coaching across measures of malingering Adrianne M. Brennan Louisiana State University

More information

Donald A. Davidoff, Ph.D., ABPDC Chief, Neuropsychology Department, McLean Hospital Assistant Professor of Psychology, Harvard Medical School

Donald A. Davidoff, Ph.D., ABPDC Chief, Neuropsychology Department, McLean Hospital Assistant Professor of Psychology, Harvard Medical School Donald A. Davidoff, Ph.D., ABPDC Chief, Neuropsychology Department, McLean Hospital Assistant Professor of Psychology, Harvard Medical School Interests: Adult/Geriatric/Forensic Neuropsychology ddavidoff@mclean.harvard.edu

More information

Award Number: W81XWH

Award Number: W81XWH AD Award Number: W81XWH-08-2-0050 TITLE: PT073853: Mild TBI Following Exposure to Explosive Devices: Device Characteristics, Neuropsychological Functioning, and Symptoms of Post-Traumatic Stress Disorder

More information

Criterion validity of the California Verbal Learning Test-Second Edition (CVLT-II) after traumatic brain injury

Criterion validity of the California Verbal Learning Test-Second Edition (CVLT-II) after traumatic brain injury Archives of Clinical Neuropsychology 22 (2007) 143 149 Criterion validity of the California Verbal Learning Test-Second Edition (CVLT-II) after traumatic brain injury Monica L. Jacobs, Jacobus Donders

More information

Noncredible Explanations of Noncredible Performance on Symptom Validity Tests

Noncredible Explanations of Noncredible Performance on Symptom Validity Tests Noncredible Explanations of Noncredible Performance on Symptom Validity Tests 5 Paul Green & Thomas Merten WHAT DO NEUROPSYCHOLOGICAL AND EFFORT TESTS MEASURE? Neuropsychological Tests Neuropsychologists

More information

Ecological Validity of the WMS-III Rarely Missed Index in Personal Injury Litigation. Rael T. Lange. Riverview Hospital.

Ecological Validity of the WMS-III Rarely Missed Index in Personal Injury Litigation. Rael T. Lange. Riverview Hospital. This is the authors version of a paper that will be published as: Lange, Rael T. Lange and Sullivan, Karen A. and Anderson, Debbie (2005) Ecological validity of the WMS-III Rarely Missed Index in personal

More information

THE VALIDITY OF THE LETTER MEMORY TEST AS A MEASURE OF MEMORY MALINGERING: ROBUSTNESS TO COACHING. A dissertation presented to.

THE VALIDITY OF THE LETTER MEMORY TEST AS A MEASURE OF MEMORY MALINGERING: ROBUSTNESS TO COACHING. A dissertation presented to. THE VALIDITY OF THE LETTER MEMORY TEST AS A MEASURE OF MEMORY MALINGERING: ROBUSTNESS TO COACHING A dissertation presented to the faculty of the College of Arts and Sciences of Ohio University In partial

More information

Estimates of the Reliability and Criterion Validity of the Adolescent SASSI-A2

Estimates of the Reliability and Criterion Validity of the Adolescent SASSI-A2 Estimates of the Reliability and Criterion Validity of the Adolescent SASSI-A 01 Camelot Lane Springville, IN 4746 800-76-056 www.sassi.com In 013, the SASSI Profile Sheets were updated to reflect changes

More information

Effects of Coaching on Detecting Feigned Cognitive Impairment with the Category Test

Effects of Coaching on Detecting Feigned Cognitive Impairment with the Category Test Archives of Clinical Neuropsychology, Vol. 15, No. 5, pp. 399 413, 2000 Copyright 2000 National Academy of Neuropsychology Printed in the USA. All rights reserved 0887-6177/00 $ see front matter PII S0887-6177(99)00031-1

More information

One-Month Test Retest Reliability of the ImPACT Test Battery

One-Month Test Retest Reliability of the ImPACT Test Battery Archives of Clinical Neuropsychology 28 (2013) 499 504 One-Month Test Retest Reliability of the ImPACT Test Battery Philip Schatz*, Charles S. Ferris Department of Psychology, Saint Joseph s University,

More information

Measurement and Classification of Neurocognitive Disability in HIV/AIDS Robert K. Heaton Ph.D University of California San Diego Ancient History

Measurement and Classification of Neurocognitive Disability in HIV/AIDS Robert K. Heaton Ph.D University of California San Diego Ancient History Measurement and Classification of Neurocognitive Disability in HIV/AIDS Robert K. Heaton Ph.D University of California San Diego Ancient History Group Means for NP and MMPI Variables N=381 Consecutive

More information

Clinical Policy: Digital EEG Spike Analysis

Clinical Policy: Digital EEG Spike Analysis Clinical Policy: Reference Number: CP.MP.105 Last Review Date: 01/18 Coding Implications Revision Log See Important Reminder at the end of this policy for important regulatory and legal information. Description

More information

Client/Testing Information

Client/Testing Information Revised Comprehensive Norms for an Expanded Halstead-Reitan Battery: Demographically Adjusted Neuropsychological Norms for African American and Caucasian Adults Developed By Robert K. Heaton, PhD, S. Walden

More information

TOPF (Test of Pre-Morbid Function)

TOPF (Test of Pre-Morbid Function) TEST OF PREMORBID FUNCTIONING TOPF (Test of Pre-Morbid Function) Case Studies TOPF (Test of Pre-Morbid Function) Case Studies Case Study 1 Client C is a 62-year-old White male with 18 years of education,

More information

Neuropsychological test performance of Hawaii high school athletes: Hawaii ImPACT normative data

Neuropsychological test performance of Hawaii high school athletes: Hawaii ImPACT normative data 1 Neuropsychological test performance of Hawaii high school athletes: Hawaii ImPACT normative data William T. Tsushima PhD, Ross Oshiro MS, and Daniel Zimbra BA Abstract Objective: Establishing normative

More information

The significance of sensory motor functions as indicators of brain dysfunction in children

The significance of sensory motor functions as indicators of brain dysfunction in children Archives of Clinical Neuropsychology 18 (2003) 11 18 The significance of sensory motor functions as indicators of brain dysfunction in children Abstract Ralph M. Reitan, Deborah Wolfson Reitan Neuropsychology

More information

NEUROPSYCHOLOGICAL ASSESSMENT S A R A H R A S K I N, P H D, A B P P S A R A H B U L L A R D, P H D, A B P P

NEUROPSYCHOLOGICAL ASSESSMENT S A R A H R A S K I N, P H D, A B P P S A R A H B U L L A R D, P H D, A B P P NEUROPSYCHOLOGICAL ASSESSMENT S A R A H R A S K I N, P H D, A B P P S A R A H B U L L A R D, P H D, A B P P NEUROPSYCHOLOGICAL EXAMINATION A method of examining the brain; abnormal behavior is linked to

More information

Effort-testing In Children Undergoing Psycho-Educational Assessment Using The Medical Symptom Validity Test

Effort-testing In Children Undergoing Psycho-Educational Assessment Using The Medical Symptom Validity Test Effort-testing In Children Undergoing Psycho-Educational Assessment Using The Medical Symptom Validity Test Helene Flamand University of Alberta Abstract Suboptimal effort affects the reliability and validity

More information

Simulated subaverage performance on the Block Span task of the Stanford-Binet Intelligence Scales- Fifth Edition

Simulated subaverage performance on the Block Span task of the Stanford-Binet Intelligence Scales- Fifth Edition Louisiana State University LSU Digital Commons LSU Master's Theses Graduate School 2011 Simulated subaverage performance on the Block Span task of the Stanford-Binet Intelligence Scales- Fifth Edition

More information

Medical Symptom Validity Test Performance Following Moderate-Severe Traumatic Brain Injury: Expectations Based on Orientation Log Classification

Medical Symptom Validity Test Performance Following Moderate-Severe Traumatic Brain Injury: Expectations Based on Orientation Log Classification Archives of Clinical Neuropsychology 32 (2017) 339 348 Medical Symptom Validity Test Performance Following Moderate-Severe Traumatic Brain Injury: Expectations Based on Orientation Log Classification Abstract

More information

The Delis-Kaplan Executive Functions System Tower Test Resilience to Response Bias

The Delis-Kaplan Executive Functions System Tower Test Resilience to Response Bias Ursidae: The Undergraduate Research Journal at the University of Northern Colorado Volume 1 Number 2 Article 2 January 2012 The Delis-Kaplan Executive Functions System Tower Test Resilience to Response

More information

Minimizing Misdiagnosis: Psychometric Criteria for Possible or Probable Memory Impairment

Minimizing Misdiagnosis: Psychometric Criteria for Possible or Probable Memory Impairment Original Research Article DOI: 10.1159/000215390 Accepted: January 30, 2009 Published online: April 28, 2009 Minimizing Misdiagnosis: Psychometric Criteria for Possible or Probable Memory Impairment Brian

More information

EPILEPSY SURGERY EVALUATION IN ADULTS WITH SCALP VIDEO-EEG MONITORING. Meriem Bensalem-Owen, MD University of Kentucky

EPILEPSY SURGERY EVALUATION IN ADULTS WITH SCALP VIDEO-EEG MONITORING. Meriem Bensalem-Owen, MD University of Kentucky EPILEPSY SURGERY EVALUATION IN ADULTS WITH SCALP VIDEO-EEG MONITORING Meriem Bensalem-Owen, MD University of Kentucky DISCLOSURES Received grants for sponsored research as investigator from: UCB Eisai

More information

Executive dysfunction in traumatic brain injury: The effects of injury severity and effort on the Wisconsin Card Sorting Test

Executive dysfunction in traumatic brain injury: The effects of injury severity and effort on the Wisconsin Card Sorting Test This article was downloaded by: [Stephen F Austin State University] On: 25 May 2015, At: 10:17 Publisher: Routledge Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office:

More information

Informed consent in clinical neuropsychology practice Official statement of the National Academy of Neuropsychology

Informed consent in clinical neuropsychology practice Official statement of the National Academy of Neuropsychology Archives of Clinical Neuropsychology 20 (2005) 335 340 Informed consent in clinical neuropsychology practice Official statement of the National Academy of Neuropsychology D. Johnson-Greene Department of

More information

Stroke Drivers Screening Assessment European Version 2012

Stroke Drivers Screening Assessment European Version 2012 Stroke Drivers Screening Assessment European Version 2012 NB Lincoln, KA Radford, FM Nouri University of Nottingham Introduction The Stroke Drivers Screening Assessment (SDSA) was developed as part of

More information

Adapting Dialectical Behavior. Therapy for Special Populations

Adapting Dialectical Behavior. Therapy for Special Populations Adapting Dialectical Behavior Therapy for Special Populations Margaret Charlton, PhD, ABPP Aurora Mental Health Center Intercept Center 16905 E. 2nd Avenue Aurora, CO 80011 303-326-3748 MargaretCharlton@aumhc.org

More information

CLINICAL UTILITY OF THE PERSONALITY ASSESSMENT INVENTORY IN THE DIAGNOSIS OF NON-EPILEPTIC SEIZURES

CLINICAL UTILITY OF THE PERSONALITY ASSESSMENT INVENTORY IN THE DIAGNOSIS OF NON-EPILEPTIC SEIZURES CLINICAL UTILITY OF THE PERSONALITY ASSESSMENT INVENTORY IN THE DIAGNOSIS OF NON-EPILEPTIC SEIZURES Sharon L. Mason, M.A. Robert C. Doss, Psy.D. John R. Gates, M.D. This paper has been prepared specifically

More information

Psychological & Neuropsychological Test

Psychological & Neuropsychological Test An Independent Licensee of the Blue Cross and Blue Shield Association Psychological & Neuropsychological Test BEACON HEALTH STRATEGIES, LLC ORIGINAL EFFECTIVE DATE HAWAII LEVEL OF CARE CRITERIA 2013 CURRENT

More information

The Importance of Symptom Validity Testing with Children: WMT and MSVT. Lloyd Flaro Ed.D., Paul Green Ph.D. & Nina Blaskewitz Dipl.Psych.

The Importance of Symptom Validity Testing with Children: WMT and MSVT. Lloyd Flaro Ed.D., Paul Green Ph.D. & Nina Blaskewitz Dipl.Psych. The Importance of Symptom Validity Testing with Children: WMT and MSVT Lloyd Flaro Ed.D., Paul Green Ph.D. & Nina Blaskewitz Dipl.Psych. Abstract: It is almost self-evident that cognitive test results

More information

Neuropathophysiologyof

Neuropathophysiologyof Neuropathophysiologyof Epilepsy and Psychiatric Comorbidity & Diagnosis and Management of Non- Epileptic Attack Disorders N Child Neurologist Auckland City Hospital Psychiatric Disorders associated with

More information

Factors Influencing the Face Validity of Effort Tests: Timing of Warning and Feedback

Factors Influencing the Face Validity of Effort Tests: Timing of Warning and Feedback University of Montana ScholarWorks at University of Montana Graduate Student Theses, Dissertations, & Professional Papers Graduate School 2 Factors Influencing the Face Validity of Effort Tests: Timing

More information

NEUROCOGNITIVE VARIABLES UNDERLYING GROUP PERFORMANCE ON A MEASURE OF EFFORT: THE MEDICAL SYMPTOM VALIDITY TEST (MSVT) Julie Hart Covert, M.S.

NEUROCOGNITIVE VARIABLES UNDERLYING GROUP PERFORMANCE ON A MEASURE OF EFFORT: THE MEDICAL SYMPTOM VALIDITY TEST (MSVT) Julie Hart Covert, M.S. NEUROCOGNITIVE VARIABLES UNDERLYING GROUP PERFORMANCE ON A MEASURE OF EFFORT: THE MEDICAL SYMPTOM VALIDITY TEST (MSVT) Julie Hart Covert, M.S. Dissertation Prepared for the Degree of DOCTOR OF PHILOSOPHY

More information

(2010) 14 (1) ISSN

(2010) 14 (1) ISSN Al-Ghatani, Ali and Obonsawin, Marc and Al-Moutaery, Khalaf (2010) The Arabic version of the Stroop Test and its equivalency to the lish version. Pan Arab Journal of Neurosurgery, 14 (1). pp. 112-115.

More information

Comparability Study of Online and Paper and Pencil Tests Using Modified Internally and Externally Matched Criteria

Comparability Study of Online and Paper and Pencil Tests Using Modified Internally and Externally Matched Criteria Comparability Study of Online and Paper and Pencil Tests Using Modified Internally and Externally Matched Criteria Thakur Karkee Measurement Incorporated Dong-In Kim CTB/McGraw-Hill Kevin Fatica CTB/McGraw-Hill

More information

Running head: CPPS REVIEW 1

Running head: CPPS REVIEW 1 Running head: CPPS REVIEW 1 Please use the following citation when referencing this work: McGill, R. J. (2013). Test review: Children s Psychological Processing Scale (CPPS). Journal of Psychoeducational

More information

PLEASE SCROLL DOWN FOR ARTICLE

PLEASE SCROLL DOWN FOR ARTICLE This article was downloaded by: [Brown University] On: 3 March 2009 Access details: Access Details: [subscription number 784168974] Publisher Psychology Press Informa Ltd Registered in England and Wales

More information

Table 1: Summary of measures of cognitive fatigability operationalised in existing research.

Table 1: Summary of measures of cognitive fatigability operationalised in existing research. Table 1: Summary of measures of cognitive fatigability operationalised in existing research. Candidate Mmeasures Studies Procedure Self-reported fatigue measure Key Findings The auditory As and auditory

More information

International Journal of Forensic Psychology Copyright Volume 1, No. 3 SEPTEMBER 2006 pp

International Journal of Forensic Psychology Copyright Volume 1, No. 3 SEPTEMBER 2006 pp International Journal of Forensic Psychology Copyright 2006 Volume 1, No. 3 SEPTEMBER 2006 pp. 29-37 The Word Memory Test and the One-in-Five-Test in an Analogue Study with Russian Speaking Participants

More information

What s Wrong With My Client: Understanding Psychological Testing in Order to Work Effectively With Your Expert

What s Wrong With My Client: Understanding Psychological Testing in Order to Work Effectively With Your Expert What s Wrong With My Client: Understanding Psychological Testing in Order to Work Effectively With Your Expert Common Standardized, Norm Referenced Psychological Tests: Diagnostic ( Personality ) Tests:

More information

Determining causation of traumatic versus preexisting. conditions. David Fisher, Ph.D., ABPP, LP Chairman of the Board PsyBar, LLC

Determining causation of traumatic versus preexisting. conditions. David Fisher, Ph.D., ABPP, LP Chairman of the Board PsyBar, LLC Determining causation of traumatic versus preexisting psychological conditions David Fisher, Ph.D., ABPP, LP Chairman of the Board PsyBar, LLC 952 285 9000 Part 1: First steps to determine causation Information

More information

Hofstra Northwell School of Medicine Department of Neurology Epilepsy Fellowship Program. Skills and Competencies Rotation Goals and Objectives

Hofstra Northwell School of Medicine Department of Neurology Epilepsy Fellowship Program. Skills and Competencies Rotation Goals and Objectives Hofstra Northwell School of Medicine Department of Neurology Epilepsy Fellowship Program Skills and Competencies Rotation Goals and Objectives The purpose of the Epilepsy fellowship program is to provide

More information

CONSULTATION / LIAISON PSYCHIATRY

CONSULTATION / LIAISON PSYCHIATRY CONSULTATION / LIAISON PSYCHIATRY Dr. Jon Hunter, MD Tel: 416-586-4800 ext. 4557 Fax: 416-586-5970 Email: jhunter@mtsinai.on.ca Jeanette Villapando Tel: 416-586-4800 ext. 8493 Fax: 416-586-8654 Email:

More information

Clinical Utility of Wechsler Memory Scale-Revised and Predicted IQ Discrepancies in Closed Head Injury

Clinical Utility of Wechsler Memory Scale-Revised and Predicted IQ Discrepancies in Closed Head Injury @ Pergamon Archives of Clinical Neuropsychology, Vol. 12, No. 8, pp. 757 762, 1997 Copyright 1997 Nationaf Academy ofneuropsychology Printed inthe USA, All rights reserved 0887-6177/97$17.00+.00 PIIS0887-6177(97)OO049-8

More information

Use a diagnostic neuropsychology HOW TO DO IT PRACTICAL NEUROLOGY

Use a diagnostic neuropsychology HOW TO DO IT PRACTICAL NEUROLOGY 170 PRACTICAL NEUROLOGY HOW TO DO IT Pract Neurol: first published as 10.1046/j.1474-7766.2003.08148.x on 1 June 2003. Downloaded from http://pn.bmj.com/ Use a diagnostic neuropsychology on 16 October

More information

CRITICALLY APPRAISED PAPER (CAP)

CRITICALLY APPRAISED PAPER (CAP) CRITICALLY APPRAISED PAPER (CAP) Couillet, J., Soury, S., Lebornec, G., Asloun, S., Joseph, P., Mazaux, J., & Azouvi, P. (2010). Rehabilitation of divided attention after severe traumatic brain injury:

More information

COGNITION PART TWO HIGHER LEVEL ASSESSMENT FUNCTIONAL ASSESSMENT

COGNITION PART TWO HIGHER LEVEL ASSESSMENT FUNCTIONAL ASSESSMENT COGNITION PART TWO HIGHER LEVEL ASSESSMENT FUNCTIONAL ASSESSMENT RECAP ON PART ONE BASIC ASSESSMENT Cognitive screening tests are one component of the cognitive assessment process and NOT equivalent to

More information

Critical Review: The Effectiveness of Constraint-Induced Language Therapy in a Distributive Format

Critical Review: The Effectiveness of Constraint-Induced Language Therapy in a Distributive Format Critical Review: The Effectiveness of Constraint-Induced Language Therapy in a Distributive Format Nicole Howell M.Cl.Sc (SLP) Candidate University of Western Ontario: School of Communication Sciences

More information

Detection and diagnosis of malingering in electrical injury

Detection and diagnosis of malingering in electrical injury Archives of Clinical Neuropsychology 20 (2005) 365 373 Detection and diagnosis of malingering in electrical injury Kevin Bianchini a,b, Jeffrey M. Love a,1, Kevin W. Greve a,b,, Donald Adams c Abstract

More information

Neuropsychological Test Development and Normative Data on Hispanics

Neuropsychological Test Development and Normative Data on Hispanics Archives of Clinical Neuropsychology, Vol. 14, No. 7, pp. 593 601, 1999 Copyright 1999 National Academy of Neuropsychology Printed in the USA. All rights reserved 0887-6177/99 $ see front matter PII S0887-6177(99)00008-6

More information

M P---- Ph.D. Clinical Psychologist / Neuropsychologist

M P---- Ph.D. Clinical Psychologist / Neuropsychologist M------- P---- Ph.D. Clinical Psychologist / Neuropsychologist NEUROPSYCHOLOGICAL EVALUATION Name: Date of Birth: Date of Evaluation: 05-28-2015 Tests Administered: Wechsler Adult Intelligence Scale Fourth

More information

Technical Report #2 Testing Children Who Are Deaf or Hard of Hearing

Technical Report #2 Testing Children Who Are Deaf or Hard of Hearing Technical Report #2 Testing Children Who Are Deaf or Hard of Hearing September 4, 2015 Lori A. Day, PhD 1, Elizabeth B. Adams Costa, PhD 2, and Susan Engi Raiford, PhD 3 1 Gallaudet University 2 The River

More information

A THESIS SUBMITTED TO THE GRADUATE DIVISION OF THE UNIVERSITY OF HAWAI I AT MĀNOA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

A THESIS SUBMITTED TO THE GRADUATE DIVISION OF THE UNIVERSITY OF HAWAI I AT MĀNOA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF PRELIMINARY INVESTIGATION OF THE EFFICACY OF CLINICALLY PRACTICAL DUAL-TASK TESTS AS A CONCUSSION ASSESSMENT TOOL: A COMPARISON OF SINGLE- AND DUAL-TASK TESTS ON HEALTHY YOUNG ADULTS A THESIS SUBMITTED

More information