Effects of Coaching on Detecting Feigned Cognitive Impairment with the Category Test

Archives of Clinical Neuropsychology, Vol. 15, No. 5, pp. 399–413, 2000
Copyright © 2000 National Academy of Neuropsychology. Printed in the USA. All rights reserved.
0887-6177/00 $–see front matter. PII S0887-6177(99)00031-1

Effects of Coaching on Detecting Feigned Cognitive Impairment with the Category Test

M. A. DiCarlo and J. D. Gfeller
Saint Louis University

M. V. Oliveri
St. John's Mercy Medical Center

In a replication and extension of previous research (Tenhula & Sweet, 1996), the current study investigated the utility of the Category Test (CT) for detecting feigned cognitive impairment. Ninety-two undergraduate participants were randomly assigned to one of three groups and administered the CT. A Coached Simulator group was instructed to simulate cognitive impairment and was provided test-taking strategies to avoid detection. An Uncoached Simulator group was simply instructed to feign impairment. A control group was instructed to perform optimally. In addition, the CT results of 30 traumatic brain injury (TBI) patients were analyzed. The results largely supported the utility of five CT malingering indicators identified by Tenhula and Sweet: (a) number of errors on subtests I and II, (b) number of errors on subtest VII, (c) total CT errors, (d) number of errors on 19 easy items, and (e) number of criteria exceeded. Correct classification rates of the five indicators for Uncoached Simulators and optimal-performance controls ranged from 87% to 98%. Correct classification rates for the TBI patients ranged from 70% to 100%. In addition, significantly more Coached Simulators were misclassified as nonsimulators on four of the CT malingering indicators, relative to their Uncoached counterparts. A decision rule of ≥1 error on subtests I and II was consistently the most accurate malingering indicator, regardless of degree of coaching or presence of TBI. This indicator correctly classified 76% of all simulators and 100% of the optimal-performance controls and TBI patients.
Implications of providing persons with test-taking strategies and the utility of these CT malingering indicators for various populations are discussed. © 2000 National Academy of Neuropsychology. Published by Elsevier Science Ltd.

Keywords: malingering, coaching strategies, Category Test

Reviews of the neuropsychological malingering literature (Franzen, Iverson, & McCracken, 1990; Miller & Miller, 1992; Nies & Sweet, 1994; Rogers, Harrell, & Liff, 1993) indicate that a variety of tests have been used to detect nonoptimal test performance. These tests can be categorized as either assessment instruments designed solely to detect feigned impairment or standard neuropsychological tests that also yield clinically relevant information. Although Rose, Hall, Szalda-Petree, and Bach (1998) question

Address correspondence to Margaret A. DiCarlo, Department of Psychiatry, Roger Williams Medical Center, 825 Chalkstone Boulevard, Providence, RI 02908; E-mail: Margaret_DiCarlo@Brown.edu

whether standard neuropsychological tests can effectively detect malingered deficits, there are numerous studies indicating that several clinically relevant tests are useful for detecting nonoptimal performance (Bernard, 1991; Gfeller & Cradock, 1998; Goebel, 1983; Iverson & Franzen, 1994; Iverson, Myers, & Adams, 1994; Millis, 1992; Mittenberg, Azrin, Millsaps, & Heilbronner, 1993; Mittenberg, Rotholc, Russell, & Heilbronner, 1996; Trueblood & Schmidt, 1993; Wiggins & Brandt, 1988).

One traditional neuropsychological test that has shown promise as a measure of neuropsychological malingering is the Category Test (CT; DeFilippis & McCampbell, 1979). A major advantage of using the CT for this purpose is that it is a standardized instrument used frequently in clinical practice that yields clinically relevant information without extending the length or expense of the formal assessment process. The CT measures several cognitive abilities, including deductive reasoning, abstract concept formation, and rule learning (Johnstone, Holland, & Hewett, 1997; Reitan & Wolfson, 1993), with a broad range of item difficulty across subtests that allows for the identification of atypical responding.

Tenhula and Sweet (1996) identified several CT Malingering Indicators that accurately classified malingered from nonmalingered test performance when they compared patterns of CT scores of brain-injured patients with normal undergraduate controls and students instructed to simulate brain injury. The overall classification rates for these indicators were as follows: 92% (errors on subtests I and II ≥ 1); 90% (errors on subtest VII ≥ 5); 90% (errors on easy items ≥ 2); 83% (total CT errors ≥ 87); and 89% (number of criteria exceeded ≥ 1).
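As a rough illustration of how these five indicators combine into a screening decision, the sketch below encodes the cutoffs listed above as boolean flags. The function name, argument names, and dict-based return are illustrative, not from the original study; only the cutoff values come from the text.

```python
# Sketch of the five CT malingering indicators reported by Tenhula and
# Sweet (1996), as summarized in the text. Input/output shapes are
# hypothetical; the cutoffs are the ones quoted above.

def ct_malingering_indicators(errors_I_II, errors_VII, easy_item_errors, total_errors):
    """Return which CT indicators are exceeded, plus the composite criterion."""
    flags = {
        "subtests_I_II": errors_I_II >= 1,    # >= 1 error on subtests I and II
        "subtest_VII": errors_VII >= 5,       # >= 5 errors on subtest VII
        "easy_items": easy_item_errors >= 2,  # >= 2 errors on 19 easy items
        "total_errors": total_errors >= 87,   # >= 87 total CT errors
    }
    # Fifth indicator: number of criteria exceeded (>= 1 flags possible feigning)
    flags["criteria_exceeded"] = sum(flags.values()) >= 1
    return flags
```

For example, a protocol with no errors on subtests I and II and a modest error total raises no flags, whereas errors on the easy subtests alone are enough to trigger the composite criterion.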
In support of the notion that those who malinger may demonstrate atypical impairment on simple tasks (Bolter, Picano, & Zych, 1985; Wiggins & Brandt, 1988), Tenhula and Sweet suggested that malingered test performance should be considered if a subject misses an excessive number of infrequently missed items on the CT.

In a replication and extension of the work by Tenhula and Sweet (1996), the present study compared CT performance of individuals with known head injury to the CT performance of individuals instructed to simulate cognitive deficits. More specifically, the effectiveness of cutoff scores identified by Tenhula and Sweet (1996) for distinguishing malingered from nonmalingered test performance, and the effects of providing specific coaching strategies to experimental participants who attempted to feign believable cognitive impairment, were investigated utilizing four experimental groups (traumatic brain injury [TBI] patients, normal controls, Coached Simulators, and Uncoached Simulators).

It is unclear whether the cutoff scores identified by Tenhula and Sweet (1996) are useful in discriminating coached malingerers, or those individuals provided specific information regarding how to avoid detection, from naive uncoached malingerers simply attempting to feign cognitive deficits without knowledge about how to do so in a convincing manner. Although it is usually not possible to determine just how sophisticated patients are in clinical practice, particularly those pursuing compensation for acquired injuries, there is evidence to suggest that some patients are knowledgeable about the assessment process. For example, attorneys have reported that their clients should be provided information about validity indicators on psychological tests (Wetter & Corrigan, 1995). Lees-Haley (1997) also reported converging evidence that attorneys attempt to influence expert opinions in neuropsychological cases by advising clients on how to respond to psychological tests.
The volunteer simulator design is the most common empirical method used to detect malingered neuropsychological test performance, because accurate base rates of malingering in clinical samples are usually unknown. According to this design, researchers instruct neurologically intact volunteers to simulate cognitive deficits. It is interesting to

note that herein lies the paradox of this methodology, because subjects are asked to comply with instructions to feign disorders in order to study subjects who feign when asked to comply (Rogers & Cavanaugh, 1983). Nonetheless, researchers use clinical decision rules, cutoff scores, and statistical analyses to distinguish simulating subjects from patients and neurologically intact controls.

The majority of previous simulation studies have been criticized for the limited information provided to volunteers regarding how to perform in a suboptimal fashion. Most researchers instructed their subjects to feign believable deficits and to avoid detection. However, the instructions varied widely across studies. Some researchers provided detailed scenarios of the incidents that precipitated litigation or compensation (Iverson & Franzen, 1994; Wiggins & Brandt, 1988). In one study (Iverson & Franzen, 1994), for example, simulators were told that they were in a car accident, knocked unconscious, and admitted to a hospital for several days. These same subjects were informed that they were involved in litigation to receive compensation for their injuries, and they were instructed to "fake the most severe, yet believable, memory problems [they] can, without making it obvious to the psychologist that [they] are faking." In contrast, some researchers simply instructed subjects to perform as though they were seeking compensation for deficits sustained from a head injury (Faust, Hart, & Guilmette, 1988).

Recent investigations have examined the effects of coaching simulating subjects with information regarding test-taking strategies, specific disorders, or specific tests.
In an analog investigation using college students, Lamb and colleagues (Lamb, Berry, Wetter, & Baer, 1994) reported that providing simulators with detailed information on closed head injury (CHI) and the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) affected scores on clinical and validity scales of this instrument. Specifically, CHI information elevated both clinical- and validity-scale scores for the experimental malingerers, whereas information on the MMPI-2 validity scales lowered both clinical- and validity-scale scores. Thus, coaching appears to have an important impact on simulated deficits of CHI as measured by a self-report instrument. Moreover, these results become even more relevant to neuropsychologists in light of Wetter and Corrigan's (1995) recent survey findings, which revealed that nearly 50% of the attorneys and over 33% of the law students in their study believed that clients referred for testing should be informed of validity scales on tests prior to an evaluation.

Martin, Bolter, Todd, and Gouvier (1991) examined the effects of task instruction on malingered memory performance in another analog investigation. These authors compared the test results from a computerized forced-choice recognition memory test of TBI patients to two groups of undergraduates assigned to a naive malingerer condition or a sophisticated malingerer condition. Naive malingerers were instructed to assume the role of an automobile accident victim who exhibited postconcussive symptoms, and who was currently involved in compensatory litigation. Sophisticated malingerers were provided the same instructions. However, they were also told to perform above chance levels, and they were encouraged to miss more hard items than easy ones to minimize the likelihood of being detected as a malingerer. The results revealed that the instructional set differentiated the sophisticated malingerers from the naive simulators.
Specifically, 95% of the naive malingerers scored well below the lowest score from the brain-injured group, consistent with the assumption that people will overestimate the appropriate level of impairment (Brandt, Rubinsky, & Lassen, 1985). In contrast, only 60% of the sophisticated malingerers performed worse than the genuine patients. It appears, therefore, that additional task instruction assisted many of the sophisticated malingerers in their efforts to simulate believable deficits.

Moreover, these authors suggested that analog studies of simulation need to include a sophisticated malingering group in order to enhance their external validity.

Frederick and Foster (1991) utilized even more elaborate instructions regarding test-taking strategies in their analog studies of simulated malingering on a forced-choice test of cognitive ability. After providing groups of sophisticated malingerers with information such as "get at least half of the answers correct," "answer the easy ones correctly," and "miss only more difficult problems," some subjects (27%) were able to produce realistic impairment. While specific instruction on how to produce test results reflective of genuine deficits assisted only a minority of subjects in avoiding detection, it is not clear how capable sophisticated malingerers were in determining item difficulty in order to utilize coaching instructions properly. Perhaps these same coaching strategies will be more effective on tasks that permit identification of item difficulty more readily. Nonetheless, in light of previously cited literature on the value of providing subjects information regarding how to feign believable deficits, it appears that instruction or coaching on specific test-taking strategies holds some promise.

To address this issue, simulating participants in the current study were divided into two groups varying in levels of sophistication. One group of Uncoached Simulators was instructed to feign believable cognitive deficits, according to modified written instructions utilized by Tenhula and Sweet (1996). The second group of Coached Simulators received information on test-taking strategies, according to instructions utilized by Frederick and Foster (1991), to improve participants' ability to avoid detection. The Uncoached Simulators received some information regarding possible neurobehavioral sequelae of traumatic brain injury.
However, unlike the Coached Simulators, these participants were not provided specific information regarding how to malinger in a convincing manner.

We hypothesized that Coached Simulators would commit significantly fewer errors than Uncoached Simulators on the CT Malingering Indicators validated by Tenhula and Sweet (1996). We also examined the classification rates for the various CT Malingering Indicators. Furthermore, in response to Nies and Sweet's (1994) criticism of several previous investigations of simulated cognitive impairment (Bernard, 1991; Faust et al., 1988; Heaton, Smith, Lehman, & Vogt, 1978), we included a postexperiment questionnaire to ensure the efficacy of our manipulations and to evaluate strategies employed by participants to feign impairment. Inclusion of this questionnaire is consistent with methodology employed by recent studies within this area of research (Iverson & Franzen, 1994; Millis, 1992; Mittenberg et al., 1993; Tenhula & Sweet, 1996; Trueblood & Schmidt, 1993).

METHOD

Participants

One hundred twenty-two persons participated in the current study. Ninety-two participants (47 males, 45 females) were recruited from undergraduate courses in psychology, and they received extra credit for their participation. Students with a self-reported history of serious medical illness, closed head injury with loss of consciousness, attention deficit disorder, or a diagnosed learning disability were excluded from the current study.

A clinical group of 30 participants (22 males, 8 females) were neurological patients with a recent history of TBI who received neuropsychological services within a midwestern teaching hospital. Specifically, 22 of these patients had recent histories of TBI subsequent to motor vehicle accidents. The remaining 8 patients experienced traumatic brain injuries consequent to falls. The average length of time since injury for the TBI patients

was approximately 56 days (M = 56.20, SD = 68.88), and the average Glasgow Coma Scale score determined upon hospital admission was approximately 12 (M = 11.74, SD = 3.92). In addition, 90% of the TBI patients experienced loss of consciousness (duration unknown), according to self-report or witness account. The TBI patients were included if they reported that they were not involved in litigation or seeking compensation for their injuries at the time of evaluation, and if they had no significant psychiatric history, substance abuse history, history of pre-existing neurological problems, or history of learning disability.

Table 1 presents demographic information regarding age and education for the four groups, as well as estimates of IQ according to the Barona Index of Intelligence (Barona, Reynolds, & Chastain, 1984). Other demographic information obtained from participants and utilized to calculate the Barona Index included race, region of the United States in which they were raised, and area of residence (urban or rural). Eighty-nine percent of the total sample was of Caucasian ethnicity, 7% was Asian, 3% was African American, and the remaining 1% was of some other ethnic origin. Seventy percent of the total sample lived in urban areas, 89% of the sample was raised in the midwestern United States, and the remaining 11% was raised in the western (5%), southern (3%), and northeastern (3%) United States.

Measures

All participants were administered the Booklet Category Test (DeFilippis & McCampbell, 1979), according to standardized instructions. This test is considered to be sensitive to many forms of brain damage (Cullum, Steinman, & Bigler, 1984). In addition, the CT has been demonstrated to be an effective measure of concept formation and rule learning (Perrine, 1993), abstract reasoning and problem solving (Lezak, 1995), and various other forms of deductive reasoning (Johnstone et al., 1997).
The Booklet CT consists of 208 items grouped into seven subtests that include a broad range of item difficulty. The items consist of various numerical and geometric shapes. The items are presented to participants one at a time. Participants are instructed to decide which number (1 through 4) is suggested by the shapes and figures on each card. Immediate verbal feedback (i.e., "correct" or "incorrect") is provided to participants after each response so that they may extract the organizing principle or concept that guides each subtest. Whereas the first six subtests each contain a single organizing principle, the seventh subtest is a composite of items taken from previous subtests. Thus, this last subtest contains a memory component of particular relevance to the literature on malingering (Tenhula & Sweet, 1996).

TABLE 1
Participant Demographics

Group                   n    Age M (SD)     Education M (SD)   IQ (Barona) M (SD)
Coached Simulators      32   19.8a (2.1)    13.3 (1.4)         104.2 (3.3)
Uncoached Simulators    30   20.3a (4.2)    12.9 (1.1)         101.7 (4.4)
Optimal Controls        30   20.0a (4.1)    13.2 (1.4)         102.6 (4.0)
TBI patients            30   32.0b (12.4)   12.7 (1.7)         102.8 (5.8)

Note. Values with different subscripts differ significantly (p < .001). TBI = traumatic brain injury.

All student volunteers completed a postexperiment questionnaire as a manipulation check. This questionnaire, a modified version of an instrument designed by Iverson and Franzen (1994), contains questions that assess participants' compliance with experimental procedures, level of motivation, strategies for performance, and levels of confidence about their performance.

Procedure

TBI patients completed the CT as part of a comprehensive consultative evaluation. Student participants were randomly assigned to the following three groups: (1) Control (optimal performance) group, (2) Uncoached Simulating group, and (3) Coached Simulating group. Prior to testing, the examiner read the instructions aloud to participants to ensure that the instructions were understood clearly. Participants in the Control group were instructed to perform optimally, and participants in the two simulating groups were given instructions similar to those utilized by Tenhula and Sweet (1996). Both simulating groups were instructed to perform as though they had sustained a head injury in an accident and were trying to receive financial compensation from personal injury litigation. In addition, both simulating groups were instructed to simulate believable cognitive deficits, and not to perform so poorly that it would be obvious to the examiner.

Participants in the Coached Simulating group received additional instructions regarding test-taking strategies used by Frederick and Foster (1991). More specifically, these participants were instructed to "get at least half of the answers right," to "answer the easy ones right," and to "miss only more difficult problems." The complete set of instructions administered to the Coached Simulating group read as follows:

As part of a research study, you are about to undergo a test that measures problem solving ability.
The experimenter will show you a series of cards with different geometric figures and designs. You will be asked to decide for each card whether the picture suggests to you the number 1, 2, 3, or 4. If you do not know the answer, just take a guess. The experimenter will tell you each time whether you are right or wrong. Under normal circumstances this information can be used to figure out a pattern. The experimenter will tell you more about how to do the test when you begin.

Your part in this project is to take the test while playing the role of a head-injured individual who has suffered a traumatic brain injury. Imagine that you were involved in an accident, in which you received a head injury and lost consciousness. Some individuals who have accidents are normal, or unharmed, following their injury, but they may fake injury to obtain financial rewards. We want to know what this faked performance looks like. In other words, you are to alter your performance to suggest that your behavior has been altered due to brain damage from the accident. People with brain injury often have memory problems, concentration problems, difficulty thinking, and some physical symptoms such as headaches and poor stamina.

Your goal is to try to produce the most severely impaired performance that you can, so that the experimenter will NOT know that you are only faking or pretending. Imagine that if you pretend well enough, you will receive a large sum of money in a personal injury lawsuit by claiming that you have problems from the head injury. Remember, you have to be convincing in your performance. This is going to take some skill on your part. You will have to remind yourself throughout the testing what you are trying to do.

We want to provide you with the following tips to use throughout the test so that you can make your performance convincing: 1) Get at least one half of the answers correct; 2) Answer the easy items correctly; and 3) Miss only more difficult items.
If you use these tips, you should be able to fool the examiner and to act like you have problems from the head injury.

Do you understand what you are to do? Do you have any questions?

So remember, to make your performance convincing you should: 1) Get at least one half of the answers correct; 2) Answer the easy items correctly; and 3) Miss only more difficult items.

Please note that after the examiner administers this test to you, you will be asked to rate particular test items on a 5-point scale. In addition, you will be asked to complete a paper-and-pencil questionnaire about this study. Please answer the rating questions and the items on the questionnaire truthfully. Please be as honest and accurate as you can. You will not be asked to fake problems on those scales or on the questionnaire.

There's one more thing we want you to know. During the testing, the examiner is not allowed to know what instructions you have received. DO NOT ask questions of the examiner or in any way reveal the nature of the instructions you have been given. You will have an opportunity to ask questions at the end of the session.

Trained examiners were aware of the general purpose of the investigation, but they were blind regarding group membership. Each examiner provided the preliminary instructions to a participant in a separate room from the other examiner. Following completion of the initial instructions, the examiners administered the CT to a participant whose group membership was unknown. As indicated previously, student participants completed a postexperiment questionnaire following the administration of the CT. The examiner then showed student participants each subtest of the CT again for a brief period of time in order to assess the perceived difficulty of each subtest. Specifically, participants were asked to rate subtest difficulty on a 5-point Likert scale.
This permitted an examination of whether Coached Simulators could accurately detect subtest difficulty in order to simulate cognitive impairment in a convincing fashion.

RESULTS

Postexperiment Questionnaire and Manipulation Check

Students assigned to either simulating group were included in the current study if they indicated that they had feigned impairment as instructed. Upon review, only one individual from the original group of students was excluded because he indicated he had not complied with the experimental procedures. Furthermore, all participants in the simulating groups answered a question, presented in a 5-point Likert format, regarding their level of motivation to feign impairment (higher scores indicating greater motivation). The mean motivation ratings were 4.31 (SD = .74) for the Coached Simulators and 4.17 (SD = .75) for the Uncoached Simulators. An independent t test indicated that the two simulating groups did not differ significantly, t(60) = .77, ns. Thus, simulating participants indicated that they were quite motivated to comply with the experimental instructions. Student participants assigned to the simulating groups also answered questions regarding the various strategies they utilized to feign impairment in a convincing fashion. Chi-square analyses revealed that, compared to Uncoached Simulators, significantly more Coached Simulators reported that they missed only difficult items, χ²(1, N = 62) = 18.63, p < .0001, and answered at least half of the items correctly as instructed, χ²(1, N = 62) = 11.16, p < .0001. Approximately 97% of the Coached Simulators reported that they utilized the various instructed strategies.

406 M. A. DiCarlo, J. D. Gfeller, and M. V. Oliveri

Perceived Subtest Difficulty

As part of the postexperiment inquiry, student participants rated CT subtest difficulty on a 5-point Likert scale (higher scores indicated greater perceived difficulty). This permitted an examination of whether participants could accurately detect subtest difficulty in order to apply the various test-taking strategies provided by Frederick and Foster (1991). A 3 × 7 mixed-design multivariate analysis of variance (MANOVA) with one between-subjects factor (group) and one within-subjects factor (CT subtest difficulty) was conducted. Neither the group × subtest difficulty interaction, F(14, 166) = 1.33, ns, nor the main effect of group, F(2, 89) = .59, ns, was significant. However, the main effect of subtest difficulty, F(6, 84) = 80.95, p < .0001, was significant. Post-hoc t tests were conducted to examine differences in perceived subtest difficulty across the seven CT subtests. A Bonferroni technique was utilized to control for an inflated error rate due to multiple comparisons (21 comparisons in total, with α = .0024). The analyses revealed that CT subtests I and II were rated as significantly less difficult than each of the five remaining CT subtests. In addition, no differences in perceived level of difficulty among CT subtests III through VII were significant. Table 2 presents the mean difficulty ratings for each CT subtest for the three groups of student participants.

Performance on the CT

Planned comparison t tests were performed to examine the prediction that Coached Simulators would commit significantly fewer errors than Uncoached Simulators on the CT Malingering Indicators provided by Tenhula and Sweet (1996).
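The Bonferroni adjustment above divides the familywise α of .05 evenly across all pairwise comparisons of the seven subtests. A minimal arithmetic sketch (illustrative only, not the authors' analysis code):

```python
from math import comb

# Familywise alpha split across all pairwise comparisons of the 7 CT subtests.
alpha = 0.05
n_comparisons = comb(7, 2)       # 7 subtests taken 2 at a time -> 21 comparisons
per_test_alpha = alpha / n_comparisons

print(n_comparisons)             # 21
print(round(per_test_alpha, 4))  # 0.0024, matching the reported criterion
```

This reproduces the per-comparison criterion of α = .0024 reported in the text.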
Consistent with our prediction, Coached Simulators committed significantly fewer errors than the Uncoached Simulators on CT subtests I and II (combined), t(41) = 4.25, p < .0001; CT subtest VII, t(48) = 3.88, p < .0001; CT total errors, t(47) = 4.73, p < .0001; and Tenhula and Sweet's (1996) easy items, t(41) = 3.68, p < .0001. A one-way MANOVA was conducted on the four CT Malingering Indicators validated by Tenhula and Sweet (1996) to examine general differences in performance between the various groups of participants that were not predicted specifically. Results revealed a significant multivariate effect for group, F(12, 305) = 12.92, p < .0001, and significant group differences in performance on all four CT Malingering Indicators: CT subtests I and II (combined), F(12, 305) = 41.31, p < .0001; CT subtest VII, F(12, 305) = 50.75, p < .0001; CT total errors, F(12, 305) = 49.31, p < .0001; and Tenhula and Sweet's (1996) easy items, F(12, 305) = 33.58, p < .0001.

TABLE 2
Mean Ratings of Perceived Difficulty for Category Test (CT) Subtests by Student Participants

CT Subtest    n     M (SD)
I             92    1.04a (.25)
II            92    1.20a (.43)
III           92    2.53b (1.10)
IV            92    2.38b (1.00)
V             92    2.59b (.97)
VI            92    2.52b (.93)
VII           92    2.32b (1.02)

Note. Values with different subscripts differ significantly (p < .001).

Table 3 displays the

means and standard deviations for the number of errors on the CT Malingering Indicators committed by all four groups of participants. Post-hoc comparison t tests were conducted to examine differences in performance on all four CT Malingering Indicators between the various groups of simulating and nonsimulating participants, as well as between the two groups of nonsimulating participants (Optimal Controls vs. TBI patients). A Bonferroni correction was utilized to control for the possibility of committing a Type I error, because these multiple comparisons (24 comparisons in total, with α = .002) were exploratory in nature. Close examination of the pattern of results in Table 3 indicates the following rank ordering of groups by the number of errors committed on several of the CT Malingering Indicators: (a) Uncoached Simulators, (b) Coached Simulators, (c) TBI patients, and (d) Optimal Performance Controls.

Overall Classification Accuracy: Simulators and Nonsimulators

Calculations were conducted to determine the classification accuracy for the simulating and nonsimulating groups, using the CT Malingering Indicators validated by Tenhula and Sweet (1996). Table 4 presents the rates of true-positive classification for the Uncoached and Coached Simulators, as well as the rates of true-negative classification for the Optimal Performance Controls and TBI patients. In addition, frequency calculations were performed on each of the CT Malingering Indicators to determine the overall classification accuracy for all simulating participants (Uncoached and Coached Simulators collapsed into one group) versus TBI patients. These calculations revealed that a clinical decision rule of >1 error on CT subtests I and II correctly classified 76% of all simulators (true-positive classification) and 100% of the TBI patients (true-negative classification).
The rates of true-positive classification for the other CT Malingering Indicators ranged from 54.8% (total CT errors > 87) to 80.6% (errors on subtest VII > 5), while the rates of true-negative classification for these indicators ranged from 70% (errors on subtest VII > 5) to 90% (errors on easy items > 2 and number of criteria exceeded > 1).

TABLE 3
Mean Number of Errors on the Category Test Malingering Indicators Committed by All Groups

                    Uncoached       Coached         Optimal         TBI
                    Simulators      Simulators      Controls        Patients
                    (n = 30)        (n = 32)        (n = 30)        (n = 30)
Indicator           M (SD)          M (SD)          M (SD)          M (SD)
Subtests I and II   10.3a (7.5)     4.0b (3.5)      .10c (.305)     .10c (.31)
Subtest VII         9.9a (3.7)      6.8b (2.3)      2.1c (1.52)     4.0d (2.36)
Total errors        107.4a (30.0)   77.5b (17.9)    35.1c* (16.7)   55.6d* (29.4)
19 easy items       7.1a (4.9)      3.5b (2.4)      1.3c (2.0)      .20c (.48)

Note. Values with different subscripts differ significantly (p < .001). TBI = traumatic brain injury. *p < .0025.

TABLE 4
Correct Classification(a) of Simulators and Nonsimulators According to the Category Test (CT) Malingering Indicators

                                    Uncoached       Coached         Optimal        TBI
                                    Simulators (%)  Simulators (%)  Controls (%)   Patients (%)
CT Malingering Indicator            (n = 30)        (n = 32)        (n = 30)       (n = 30)
Errors on subtests I and II > 1     93.3            59.4            100            100
Errors on subtest VII > 5           90.0            71.9            100            70.0
Total CT errors > 87                76.7            34.4            100            83.3
Errors on easy items > 2            80.0            56.3            100            90.0
Number of criteria exceeded > 1     93.3            65.6            100            90.0

(a) Correct classification = % of simulators classified as malingering (true positives) and % of nonsimulators classified as not malingering (true negatives).

Misclassified Simulators: Effects of Coaching

Percentage calculations and chi-square analyses were conducted to determine errors in classification accuracy for Uncoached and Coached Simulators according to the Malingering Indicators identified by Tenhula and Sweet (1996). Chi-square analyses revealed that Coached Simulators were misclassified significantly more often than Uncoached Simulators with the application of the following four CT Malingering Indicators: (a) CT subtests I and II combined, (b) CT total errors, (c) Tenhula and Sweet's (1996) easy items, and (d) number of criteria exceeded. There was no significant difference between the misclassification rates of Coached and Uncoached Simulators for the number of errors (cutoff > 5) on CT subtest VII. Table 5 displays the percentages of misclassified simulators according to each of the CT Malingering Indicators. Thus, the Coached Simulators appear to have effectively utilized the information regarding test-taking strategies on four of the five CT Malingering Indicators.
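The five CT Malingering Indicators amount to a simple set of cutoff rules. The sketch below expresses them in code; the cutoff values and the ">1 criteria exceeded" aggregation follow the text, but the function name, dictionary layout, and direction of the inequalities are illustrative assumptions, not the authors' scoring procedure:

```python
# Illustrative sketch of the five CT Malingering Indicator decision rules
# (cutoffs as reported for Tenhula & Sweet, 1996). Hypothetical code, not
# the original study's scoring software.

CUTOFFS = {
    "subtests_I_II": 1,   # errors on subtests I and II combined
    "subtest_VII":   5,   # errors on subtest VII
    "total_errors":  87,  # total CT errors
    "easy_items":    2,   # errors on the 19 "easy" items
}

def flag_profile(errors):
    """Return which indicators an error profile exceeds.

    `errors` maps the four keys above to observed error counts.
    """
    flags = {k: errors[k] > cut for k, cut in CUTOFFS.items()}
    # Fifth indicator: more than one of the four criteria exceeded.
    flags["criteria_exceeded"] = sum(flags.values()) > 1
    return flags

# Mean error profile of the Uncoached Simulators (Table 3): every rule fires.
uncoached = {"subtests_I_II": 10.3, "subtest_VII": 9.9,
             "total_errors": 107.4, "easy_items": 7.1}
print(flag_profile(uncoached))
```

Applied to the mean TBI-patient profile from Table 3 (.10, 4.0, 55.6, .20), the same function flags no indicator, consistent with the high true-negative rates in Table 4.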
TABLE 5
Percentages of Misclassified Simulators(a) According to Category Test (CT) Malingering Indicators

                                    Uncoached       Coached
                                    Simulators (%)  Simulators (%)
CT Malingering Indicator            (n = 30)        (n = 32)        Chi-square
Errors on subtests I and II > 1     6.7             40.6            9.74**
Errors on subtest VII > 5           10.0            28.1            3.26
Total CT errors > 87                23.3            65.6            11.18**
Errors on easy items > 2            20.0            43.8            4.00*
Number of criteria exceeded > 1     6.7             34.4            7.17*

(a) Percentage of simulators classified as not malingering (false negatives). *p < .05. **p < .005.

Classification Accuracy: Uncoached Simulators Versus Nonsimulators

Separate frequency calculations were performed on each of the CT Malingering Indicators to determine the overall classification accuracy for Uncoached Simulators versus the nonsimulating groups combined (i.e., Optimal Performance Controls and TBI patients). The Coached Simulators were excluded from these analyses to permit comparison of the current results with those obtained by Tenhula and Sweet (1996), and to examine the effectiveness of the decision rules when applied to participants attempting to feign impairment without the benefit of additional instruction in test-taking strategies. The Coached Simulators were also excluded because their performance on the CT Malingering Indicators differed significantly from that of the Uncoached Simulators. The hit rates for the CT Malingering Indicators were comparable to those obtained by Tenhula and Sweet: (a) 97.8% with a cutoff of >1 error on CT subtests I and II, (b) 86.7% with a cutoff of >5 errors on CT subtest VII, (c) 86.7% with a cutoff of >87 total CT errors, and (d) 90% with a cutoff of >2 errors on Tenhula and Sweet's easy items.

DISCUSSION

The results of this study support the utility of the CT Malingering Indicators identified by Tenhula and Sweet (1996) for distinguishing malingered from nonmalingered test performance. In addition, the results revealed that providing the specific test-taking strategies offered by Frederick and Foster (1991) helped Coached Simulators feign believable cognitive impairment more effectively than their Uncoached counterparts. Specifically, significantly more Coached Simulators than Uncoached Simulators were misclassified as performing optimally on four of the CT Malingering Indicators. However, a decision rule of >1 error on CT subtests I and II was consistently the most accurate Malingering Indicator, regardless of coaching or the presence of traumatic brain injury.
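The pooled hit rates reported for the Uncoached Simulators versus the combined nonsimulating groups can be reproduced arithmetically from the group-wise classification percentages in Table 4 (30 participants per group). The helper function below is illustrative, not part of the original study:

```python
# Overall hit rate = (true positives + true negatives) / total cases, pooling
# Uncoached Simulators (n = 30) with Optimal Controls and TBI patients
# (n = 30 each). Group-wise percentages are taken from Table 4.

def hit_rate(tp_pct, tn_pcts, n=30):
    """Pooled percent correct from a true-positive rate and true-negative rates."""
    correct = round(tp_pct / 100 * n) + sum(round(p / 100 * n) for p in tn_pcts)
    return 100 * correct / (n * (1 + len(tn_pcts)))

# Errors on subtests I and II > 1: 93.3% TP, 100%/100% TN -> 88/90 correct
print(round(hit_rate(93.3, [100, 100]), 1))  # 97.8
# Errors on subtest VII > 5: 90% TP, 100%/70% TN -> 78/90 correct
print(round(hit_rate(90, [100, 70.0]), 1))   # 86.7
```

The same calculation recovers the 86.7% (total CT errors) and 90% (easy items) figures, confirming the internal consistency of the reported rates.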
More generally, the results of this study add to the growing literature regarding the efficacy of using clinically relevant cognitive measures to assess nonoptimal test performance (Bernard, 1991; Gfeller & Cradock, 1998; Goebel, 1983; Iverson & Franzen, 1994; Iverson et al., 1994; Millis, 1992; Mittenberg et al., 1993, 1996; Trueblood & Schmidt, 1993; Wiggins & Brandt, 1988). Consistent with predictions, providing information regarding test-taking strategies to the Coached Simulators differentially affected their performance on the CT Malingering Indicators reported by Tenhula and Sweet (1996). Coached Simulators committed significantly fewer errors than Uncoached Simulators on the CT Malingering Indicators. Moreover, significantly more Coached Simulators were misclassified as performing optimally by the cutoff scores identified for every CT Malingering Indicator, except for subtest VII, when compared to their Uncoached counterparts. These results indicate that the Coached Simulators were able to use basic test-taking instructions to avoid detection when they were successful at determining the level of subtest or item difficulty. These results are consistent with prior research that examined the effectiveness of subject coaching for avoiding detection of malingered test performance (Frederick & Foster, 1991; Lamb et al., 1994; Martin et al., 1991; Rose et al., 1998). In similar analog investigations, Martin et al. (1991) and Frederick and Foster (1991) demonstrated that specific, yet simple instructional sets resulted in differential performance between coached simulators and uncoached simulators on experimental forced-choice tasks of recognition memory and cognitive ability. Moreover, in response to an assertion made by Nies and Sweet (1994) that malingering litigants may be more sophisticated than college student volunteers, the current study included the additional coaching component to enhance external validity and to better approximate clinical reality. 
In order for the specific test-taking strategies to be effective, the Coached Simulators in the current study had to perceive variability in subtest difficulty. Analysis of the postexperiment questionnaire indicated that student participants in all three groups rated

CT subtests I and II as less difficult than subtests III through VII. Moreover, approximately 97% of the Coached Simulators reported that they had utilized strategies consistent with the experimental instructions to avoid detection. It is important to note that although the Coached Simulators committed significantly fewer errors on the CT Malingering Indicators than the Uncoached Simulators, they still committed significantly more errors than the TBI patients. Thus, the additional coaching differentiated the Coached Simulators from their Uncoached counterparts, but it did not permit them to approximate the performance of the TBI patients. Correct classification rates for the Uncoached Simulators according to the CT Malingering Indicators ranged from 77% to 93%. These rates are consistent with, and slightly higher than, the rates reported by Tenhula and Sweet (1996). The cutoffs of >1 error on CT subtests I and II and >1 malingering criterion exceeded yielded the highest true-positive classification rates, while the total number of CT errors yielded the lowest. Virtually all of the Optimal Performance Controls were classified correctly by all five CT Malingering Indicators. Moreover, all of the TBI patients were classified correctly according to the clinical decision rule for CT subtests I and II. However, the four other CT Malingering Indicators yielded true-negative classification rates for the TBI patients that ranged from 70% (errors on CT subtest VII) to 90% (Tenhula and Sweet's 19 easy items and number of criteria exceeded). Taken together, the current results largely support the utility of Tenhula and Sweet's (1996) CT Malingering Indicators, particularly with persons suspected of feigning impairment who lack information regarding test-taking strategies to avoid detection.
The two most effective clinical decision rules were based on subtests judged as easy by the experimental participants (CT subtests I and II) and on items infrequently failed by most individuals (the 19 easy items), including brain-injured patients. These findings support the contention that persons who malinger may demonstrate excessive impairment on relatively easy tasks (Bolter et al., 1985; Wiggins & Brandt, 1988). More specifically, these results support Tenhula and Sweet's contention that although the items from CT subtests I and II do not offer helpful clinical information regarding a patient's ability level (Laatsch & Choca, 1991), they do offer useful information regarding potential malingering and also prepare persons for subsequent subtests that are more challenging. In a recent investigation of the construct validity of the CT, Johnstone et al. (1997) found that CT subtests I and II are conceptually distinct from the other CT subtests and from other measures of cognitive ability, such as the Wechsler Adult Intelligence Scale-Revised (Wechsler, 1981), the Wechsler Memory Scale-Revised (Wechsler, 1987), the Trail Making Test, and the Tactual Performance Test (Lezak, 1995). Through factor analysis, these authors identified a Symbol Recognition/Counting factor comprising CT subtests I and II, distinct from a Spatial Positioning Reasoning factor comprising CT subtests III, IV, and VII, and a Proportional Reasoning factor comprising CT subtests V and VI. Their investigation also supported the relative simplicity of CT subtests I and II, because the 306 neurological patients in their study committed, on average, less than 1 error combined across both subtests. While the clinical decision rule for CT subtests I and II yielded no false-positive errors for either of the nonsimulating groups, the remaining CT Malingering Indicators yielded false-positive rates that ranged from 10% to 30% for the TBI patients. Iverson et al.
(1994) reported relatively similar false-positive classification rates for the CT Malingering Indicators they attempted to cross-validate on a large sample of patients with closed head injuries. They reported a 34% false-positive rate for >5 errors on CT subtest VII and a 21% false-positive rate for >87 CT total errors, compared to the current 30% and 17% false-positive rates, respectively. However, they reported a 12% false-positive rate for >1 error on CT subtests I and II, whereas neither the current study nor Tenhula and Sweet (1996) obtained any false-positive classifications for this CT Malingering Indicator. Iverson and colleagues did not report the level of neuropsychological impairment of their clinical sample. However, they indicated that at least 28% of their sample had sustained a severe head injury, as defined by abnormal neuroradiological findings. The severity of cognitive impairment was not analyzed in either Tenhula and Sweet's (1996) research or the current investigation. Therefore, one can only speculate that severity of head injury may explain, at least in part, these discrepant findings and may be an important variable in predicting nonoptimal test performance on the CT. The overall classification rates of the CT Malingering Indicators in the current study varied as a function of whether participants received coaching in test-taking strategies. When the CT performance of Uncoached Simulators was compared to that of the Optimal Performance Controls and TBI patients, the hit rates for the five CT Malingering Indicators ranged from an impressive 87% to 98%. However, the hit rates ranged from approximately 70% to 87% when both groups of simulators (Coached and Uncoached) were compared to the Optimal Performance Controls, and from approximately 64% to 84% when all simulators were compared to the TBI patients. Once again, the cutoff of >1 error on CT subtests I and II was the most effective at differentiating simulators from nonsimulators in all comparisons. Furthermore, the cutoff of >87 total CT errors was consistently the least accurate decision rule, regardless of degree of coaching or presence of TBI. A discriminant function, derived by Mittenberg et al.
(1996) to identify nonoptimal test performance, included the total number of CT errors along with other test indicators from the Halstead-Reitan Battery (Reitan & Wolfson, 1993). Considering the current findings, specific subtest performance may yield even more accurate classification rates when combined with other neuropsychological tests that are sensitive to feigned cognitive impairment. The various limitations of the current study merit brief discussion. For example, it is unclear whether the findings of this analog investigation generalize to actual clinical contexts involving litigating individuals who feign impairment. However, Mittenberg et al. (1996) offered support for the external validity of utilizing normal simulators through the accurate identification of several groups of clinical malingerers from published data. Additionally, the current study excluded TBI patients with either preinjury neurological problems or significant difficulties with substance abuse. Such preinjury factors are relatively common in clinical samples of TBI patients and would likely contribute to more pronounced neuropsychological impairment. Therefore, although the severity of head injury was not directly controlled for or considered in the current analyses, it is possible that varying degrees of severity or the presence of additional preinjury factors would differentially affect error patterns on the CT. In summary, all five CT Malingering Indicators yielded overall classification rates substantially higher than chance when the CT performance of Uncoached Simulators was compared to that of Optimal Performance Controls and TBI patients. However, only the >1 error cutoff for CT subtests I and II remained reasonably effective at differentiating feigned from optimal test performance when the CT performance of the Coached Simulators was considered. All other CT Malingering Indicators yielded clinically problematic rates of misclassification.
The specific test-taking instructions provided to the Coached Simulators were included to enhance external validity, because in clinical practice it is usually not possible to determine patients' level of sophistication or knowledge of the assessment process, particularly among those pursuing compensation for acquired injuries. Therefore, responsible and conservative clinical practice benefits most

from the use of the decision rule for CT subtests I and II, with secondary or convergent support from the other CT Malingering Indicators. Future research should investigate the effectiveness of combining the various CT Malingering Indicators with other sensitive neuropsychological tests to determine whether classification rates improve upon those demonstrated in the current study. Replicating the current findings in independent samples of suspected malingerers would help to establish their generalizability. It will also be important to examine false-positive error rates in clinical samples with varying levels of impairment and a range of disorders with known neurobehavioral sequelae. Furthermore, it will be important to investigate the effects of providing test-taking strategies on other standardized instruments used frequently in clinical practice to detect nonoptimal test performance and to measure various cognitive abilities.

REFERENCES

Barona, A., Reynolds, C. R., & Chastain, R. (1984). A demographically based index of premorbid intelligence for the WAIS-R. Journal of Consulting and Clinical Psychology, 52, 885-887.
Bernard, L. C. (1991). The detection of faked deficits on the Rey Auditory Verbal Learning Test: The effect of serial position. Archives of Clinical Neuropsychology, 6, 81-88.
Bolter, J. F., Picano, J. J., & Zych, K. (1985). Item error frequency on the Halstead Category Test: An index of performance validity. Paper presented at the annual meeting of the National Academy of Neuropsychology, Philadelphia, PA.
Brandt, J., Rubinsky, E., & Lassen, G. (1985). Uncovering malingered amnesia. Annals of the New York Academy of Sciences, 444, 502-503.
Cullum, C. M., Steinman, D. R., & Bigler, E. D. (1984). Relationship between fluid and crystallized cognitive functions using Category Test and WAIS scores. International Journal of Clinical Neuropsychology, 6, 172-174.
DeFilippis, N. A., & McCampbell, E. (1979). The Booklet Category Test: Research and clinical form: Manual. Odessa, FL: Psychological Assessment Resources.
Faust, D., Hart, K., & Guilmette, T. J. (1988). Pediatric malingering: The capacity of children to fake believable deficits on neuropsychological testing. Journal of Consulting and Clinical Psychology, 56, 578-582.
Franzen, M. D., Iverson, G. L., & McCracken, L. M. (1990). The detection of malingering in neuropsychological assessment. Neuropsychology Review, 1, 247-279.
Frederick, R., & Foster, H. (1991). Multiple measures of malingering on a forced-choice test of cognitive ability. Psychological Assessment: A Journal of Consulting and Clinical Psychology, 3, 596-602.
Gfeller, J. D., & Cradock, M. (1998). Detecting feigned neuropsychological impairment with the Seashore Rhythm Test. Journal of Clinical Psychology, 54, 431-438.
Goebel, R. (1983). Detection of faking on the Halstead-Reitan Neuropsychological Test Battery. Journal of Clinical Psychology, 39, 731-742.
Heaton, R. K., Smith, H. H., Jr., Lehman, R. A. W., & Vogt, A. T. (1978). Prospects for faking believable deficits on neuropsychological testing. Journal of Consulting and Clinical Psychology, 46, 892-900.
Iverson, G. L., & Franzen, M. D. (1994). The Recognition Memory Test, Digit Span, and Knox Cube Test as markers of malingered memory impairment. Assessment, 1, 323-334.
Iverson, G. L., Myers, B., & Adams, R. (1994). Specificity of the Category Test for detecting malingering. Poster presented at the annual meeting of the National Academy of Neuropsychology, Orlando, FL.
Johnstone, B., Holland, D., & Hewett, J. E. (1997). The construct validity of the Category Test: Is it a measure of reasoning or intelligence? Psychological Assessment, 9, 28-33.
Laatsch, L., & Choca, J. (1991). Understanding the Halstead Category Test by using item analysis. Psychological Assessment: A Journal of Consulting and Clinical Psychology, 3, 701-704.
Lamb, D. G., Berry, D. T. R., Wetter, M. W., & Baer, R. A. (1994). Effects of two types of information on malingering of closed head injury on the MMPI-2: An analog investigation. Psychological Assessment, 6, 8-13.
Lees-Haley, P. (1997). Attorneys influence expert evidence in forensic psychological and neuropsychological cases. Assessment, 4, 321-324.
Lezak, M. D. (1995). Neuropsychological assessment (3rd ed.). New York: Oxford University Press.
Martin, R., Bolter, J., Todd, M., & Gouvier, W. D. (1991). Effects of sophistication and motivation on the