CONTENT ANALYSIS OF COGNITIVE BIAS: DEVELOPMENT OF A STANDARDIZED MEASURE Heather M. Hartman-Hall David A. F. Haaga


Journal of Rational-Emotive & Cognitive-Behavior Therapy, Volume 17, Number 2, Summer 1999

CONTENT ANALYSIS OF COGNITIVE BIAS: DEVELOPMENT OF A STANDARDIZED MEASURE

Heather M. Hartman-Hall and David A. F. Haaga
American University

ABSTRACT: The purpose of this research was to develop a standardized content analytic measure of cognitive bias as conceptualized in Beck's (1987) cognitive theory of depression. In a pilot study it was determined that a written stimulus format was preferable to an audiotaped stimulus format with respect to comprehensibility. Valence and expectancy ratings collected in this pilot study also served as the basis for selection of items for the final measure, balancing positive and negative, expected and unexpected events. In Study 2, open-ended written responses to questions about the main cause of each event, and the justifications for these attributions, were coded for indices of bias, defined (as in Cook & Peterson, 1986) as justifications that fail to cite covariation of the ascribed cause with the effect. Cognitive bias scores in Study 2 showed internal consistency (positive item-remainder correlations) and high interrater reliability. As predicted, justifications of attributions for expected events were more biased and less rational than were justifications of attributions for unexpected events.

Cognitive theory of depression (Beck, 1987) proposes that depression is associated with negative thinking about the self, the world, and the future. This negative thinking is believed to be maintained in part by cognitive biases, such as a tendency to overgeneralize the implications of specific negative events. Many studies have corroborated this prediction by finding elevated scores on cognitive bias measures among depressed people (Haaga, Dyck, & Ernst, 1991). These studies have measured cognitive bias via questionnaires (e.g., Smith, O'Keeffe, & Christensen, 1994), sentence completions (Watkins & Rush, 1983), or articulated thoughts in simulated situations (White, Davison, Haaga, & White, 1992).

Each of these measurement methods is useful but has potential limitations. Use of cognition questionnaires requiring endorsement of one or another investigator-provided response alternative runs the risk of providing prompts with which people will agree even if their own spontaneous thoughts differed in theoretically important ways from the a priori-defined choices (Davison, Vogel, & Coffman, 1997). Sentence completion and articulated thoughts measures avoid this potential problem by incorporating unstructured response formats. However, in standard uses of these procedures there is no opportunity to probe responses in order to find out more about the respondent's intended meaning (analogous to the inquiry phase of a Rorschach administration). The absence of such probes may sometimes set constraints on the interpretability of open-ended verbal responses with respect to cognitive variables (Haaga, 1989).

A promising alternative method of evaluating cognitive bias was devised by Cook and Peterson (1986). Their method involved content analysis of interview transcripts. Depressed and nondepressed women were asked to describe negative events they had experienced in the past year, then to identify the major cause of each event. Respondents were queried further regarding the justification for these causal attributions, i.e., why they perceived the named cause to be the cause of the specific negative event under discussion. These justifications provided the critical material for independent coders' evaluations of cognitive bias. In particular, if the justification cited covariation evidence ("evidence that the attributed cause covaries with the event of concern"; Cook & Peterson, 1986, p. 294), this was taken to be a rational justification.

Address correspondence to David A. F. Haaga, Department of Psychology, Asbury Building, American University, Washington, DC 20016-8062; email: dhaaga@american.edu. © 1999 Human Sciences Press, Inc.
Justifications failing to cite covariation data were coded as irrational, and these included the various types of cognitive bias (selective abstraction, overgeneralization, arbitrary inference, etc.) described by Beck.

Although Cook and Peterson's method of content analyzing interviews for indices of cognitive bias has commendable features, such as open-ended responding and follow-up probes, it has two noteworthy limitations. First, the exclusive focus on negative events limits content validity. Later research with the interview method suggested that this is an important limitation. In particular, diagnostic group differences in cognitive bias interacted with situational valence, such that depressed people were more biased in justifying their attributions for negative events (replicating Cook & Peterson, 1986) but less biased in justifying attributions for positive events (McDermut, Haaga, & Bilek, 1997). Second, by focusing the assessment of cognition on recent actual events, the interview method maximizes realism but sacrifices standardization. That is, any associations between cognitive bias and individual-difference variables such as personality traits are ambiguous in that they could be ascribed to differences in cognitive processing (as intended) or to differences in the nature of events experienced by people scoring high vs. low on the pertinent personality measure.

STUDY 1

Based upon these considerations, the research reported in this article was conducted to develop a standardized test of cognitive bias, based on the interview method used by Cook and Peterson (1986) and by McDermut et al. (1997) but with the same stimulus events for all respondents. Our first study was a pilot investigation used to determine the method of stimulus presentation (written or audiotaped) to be used in the cognitive bias test, as well as to select specific items. Content validity was taken into account in that we aimed to balance the test content with respect to valence (positive vs. negative) and predictability (expected vs. unexpected events). People are less likely to undergo extensive attributional search when an event is expected (Kanazawa, 1992), and therefore respondents may be less able to provide a rational justification for their attributions for expected events.

Method

Participants. Participants were 48 undergraduate psychology students, who earned partial course credit in return for participating in the study. The sample was predominantly Caucasian (90%), never-married (98%), female (69%), and young (mean age = 20.6 years).

Materials. Two types of stimulus materials were evaluated.
First, about one-half of the participants (n = 23) were presented with written descriptions of hypothetical events. Twenty events were tested, with each of two subgroups (n = 11 and n = 12, respectively) responding to a different, randomly selected subset of ten events. The 20 events consisted of five of each of four types: positive expected, positive unexpected, negative expected, and negative unexpected. Each event description consisted of several sentences describing both the situation that immediately preceded the event and the event itself. Many of the hypothetical events were based upon incidents described by participants in the McDermut et al. (1997) study. Participants were asked to imagine that they were actually experiencing the event described and to respond in writing to two questions: "What would you identify as the major cause of this event?" and "What makes you think that the cause listed above is the major cause of this event?" Participants were then asked to rate the nature of the event on a 7-point Likert-type scale (1 = very negative event, 7 = very positive event) as well as how expected the event would be (1 = totally unexpected event, 7 = totally expected event).

Second, the other one-half of participants (n = 25) responded to audiotaped depictions of hypothetical events. Participants listened to recorded role-plays of the same events as were presented in written form in the other study condition. Again, two subgroups (ns = 12 and 13, respectively) each evaluated ten events. After each event was depicted on tape, the participant responded in writing to the same four questions (major cause, justification of causal attribution, valence rating, expectancy rating) listed earlier.

Results and Discussion

The format involving written stimuli proved more comprehensible to participants and was therefore selected for use in Study 2. Three participants in the audiotape condition expressed (in written comments at the end of the study) confusion as to how they were supposed to respond, whereas no one in the written-stimulus condition did so.
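The valence and expectancy ratings collected in this condition also drove item selection, as described in the next paragraph: an item was retained when its mean rating on each dimension fell on the intended side of the 7-point scale's midpoint (4.00), and when more than three items of a type qualified, the least variable were kept. A minimal sketch of that rule, using invented event names and ratings (none of these numbers are the authors' data):

```python
# Sketch of the Study 1 item-selection rule on invented ratings.
from statistics import mean, stdev

MIDPOINT = 4.0  # midpoint of the 7-point valence and expectedness scales

def qualifies(valences, expectancies, positive, expected):
    """Both means must fall on the intended side of the midpoint."""
    v, e = mean(valences), mean(expectancies)
    v_ok = v > MIDPOINT if positive else v < MIDPOINT
    e_ok = e > MIDPOINT if expected else e < MIDPOINT
    return v_ok and e_ok

def select_three(candidates, positive, expected):
    """candidates: {event_name: (valence_ratings, expectancy_ratings)}.
    Keep qualifying items; break ties toward the least variable ratings."""
    kept = {n: (v, e) for n, (v, e) in candidates.items()
            if qualifies(v, e, positive, expected)}
    ranked = sorted(kept, key=lambda n: stdev(kept[n][0]) + stdev(kept[n][1]))
    return ranked[:3]

# Invented positive-unexpected candidates, each rated by five participants.
candidates = {
    "wins_money": ([5, 6, 4, 6, 5], [2, 3, 2, 2, 3]),
    "club_officer": ([6, 6, 7, 5, 6], [3, 2, 3, 4, 2]),
    "honors_course": ([6, 7, 5, 6, 6], [4, 3, 5, 4, 4]),  # expectancy mean 4.0: fails
    "old_friend_calls": ([6, 5, 6, 6, 6], [2, 2, 3, 2, 3]),
}
print(select_three(candidates, positive=True, expected=False))
```

Here `honors_course` is dropped because its mean expectedness sits exactly on the midpoint rather than below it, mirroring the "correct side of the scale midpoint" criterion.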
Selection of which events to retain for the final written-stimulus measure to be evaluated in Study 2 was based upon valence and expectancy ratings. Twelve event descriptions¹ (three of each type) were selected for the final measure (see Table 1 for valence and expectancy data, based on the written-stimulus condition responses only). For an item to be included in the final measure, its average ratings on both valence and expectancy had to be on the correct side of the scale midpoint. For example, an event intended to be positive and unexpected would have to yield an average valence rating over 4.00 and an average expectancy rating under 4.00. When more than three items of a given type met this standard, those with less variable valence and expectancy ratings were selected for the final measure.

¹The complete event descriptions used in this measure can be obtained from the corresponding author.

Table 1
Average Valence and Expectedness Ratings of Hypothetical Events Selected for Final Measure

                                                  Valence       Expectedness
Event                                            Mean    SD     Mean    SD
Positive Expected:
  Getting a raise after good review              6.67    .49    4.92    1.16
  Enjoying trip with friend                      6.75    .62    5.83     .83
  Romantic partner wanting to commit             6.09   1.14    4.91    1.14
Positive Unexpected:
  Winning money at casino                        5.25   1.42    2.33    1.44
  Elected as club officer                        6.00    .85    2.83    1.40
  Recommended for honors course                  6.00   1.00    3.91    1.64
Negative Expected:
  Doing poorly in class                          1.58    .51    4.58    1.93
  Romantic partner ending distant relationship   2.83   1.40    5.08     .90
Negative Unexpected:
  Friend canceling plans                         2.91    .94    3.36    1.21
  Romantic partner ending close relationship     2.33    .98    2.83    1.99
  Having problems with roommate*                 3.08    .67    4.58     .67
  Not getting job offer                          2.82    .87    3.91    1.51

Note. Valence ratings were made on a 1 (Very Negative) to 7 (Very Positive) scale. Expectedness ratings were made on a 1 (Totally Unexpected) to 7 (Totally Expected) scale. *This item was rewritten and included in Study 2 as a negative expected event.

STUDY 2

The 12-item written test developed in Study 1 was further evaluated along several lines in a new sample. First, we evaluated the interrater reliability of rationality coding of participants' justifications for their

causal attributions for the events. Second, we evaluated internal consistency of the four (crossing valence and expectancy) subscales by computing item-remainder correlations for rationality ratings on each item. Finally, we made a first step at evaluating construct validity by comparing mean rationality ratings as a function of situation valence and expectancy. Based upon research by Kanazawa (1992) showing less spontaneous attributional activity after expected events, we reasoned that a valid measure of the rationality of participants' justifications for causal attributions should reveal less rational justifications for attributions for expected events.

Method

Participants. Participants were 90 undergraduate students, who received partial course credit in psychology courses in return for participating in the study. As in Study 1, the sample was mostly young (mean age = 21.1 years), female (84%), Caucasian (77%), and never married (94%).

Materials. The 12-item test developed in Study 1 was administered. Participants responded to three each of four types of events: positive expected, positive unexpected, negative expected, and negative unexpected. After each event description, participants were asked to respond to two written questions: "What would you identify as the major cause of this event?" and "What makes you think that the cause listed above is the major cause of this event?"

Three independent raters who were masked as to the hypotheses of the study coded each justification on a scale from 1 (Clearly Irrational) to 5 (Clearly Rational). Judgements about rationality were made according to a modified version of the rating scheme developed by McDermut et al. (1997). This coding system was based on Kelley's (1973) definition of rational attributional processes. A rational justification was defined as one that cited covariation information.
Also rated as rational were justifications in cases where covariation information was not available but the justification was clearly rational (for example, a justification that cited a knowledgeable authority). As in Cook and Peterson (1986) and McDermut et al. (1997), justifications not meeting these standards were considered irrational. Irrational justifications included those providing no specific basis for the selected attribution (e.g., simply restating the attribution or claiming to "just know" that the hypothesized cause was the actual cause), as well as justifications reflecting the specific cognitive biases (selective abstraction, overgeneralization, arbitrary inference, etc.) described in cognitive theory of depression. Raters practiced making these judgements on the responses from the pilot data in Study 1 during the training phase.²

RESULTS AND DISCUSSION

Interrater reliability of judgments of the rationality of justifications for attributions was high. Using Winer's (1971) intraclass correlation equation, as recommended in Bartko (1976), the reliability coefficient for the set of three raters was r = .85.

To test internal consistency of the rationality ratings, item-remainder correlations were computed for each item within each of the four event types (negative expected, negative unexpected, positive expected, and positive unexpected). All 12 item-remainder correlations were positive (rs ≥ .18), and 11 of the 12 were significant (see Table 2).

To test the effects of expectancies and situation valence on rationality, a 2 × 2 repeated measures ANOVA (positive/negative events × expected/unexpected events) was conducted (average rationality ratings in each cell are listed in Table 3). A significant main effect for expectedness was found, such that rationality ratings were lower (more biased) for expected events, F(1, 89) = 5.43, p < .03. The main effect of situation valence was nonsignificant, F(1, 89) = 0.70, p > .4, as was the interaction of situation valence and expectedness, F(1, 89) = 3.49, p < .07.

GENERAL DISCUSSION

Content analysis methods involve inferring psychological states from participants' written or spoken words. Such methods have been used to study a wide range of personality-relevant variables (Smith, 1992) and may be especially well-suited to assessment of cognition inasmuch as language provides a major set of clues to individuals' conscious thoughts (Lee & Peterson, 1997).
The purpose of this study was to develop an assessment device useful in future research on Beck's (1987) theory that depressed people make processing errors when interpreting events. As expected, rationality ratings for justifications of causal attributions were higher for unexpected than for expected events. This is consistent with basic research on attributional processing and expectancies and tends to support the validity of the content analytic measure. Reliability evidence (internal consistency, interrater reliability) was also favorable.

²The coding manual can be obtained from the corresponding author.

Table 2
Item-Remainder Correlations of Rationality Ratings for Events

Positive Expected
  Getting a raise after good review             .22*
  Enjoying trip with friend                     .33*
  Romantic partner wanting to commit            .35*
Positive Unexpected
  Winning money at casino                       .23*
  Elected as club officer                       .18
  Recommended for honors course                 .27*
Negative Expected
  Doing poorly in class                         .21*
  Romantic partner ending distant relationship  .21*
  Having problems with roommate                 .25*
Negative Unexpected
  Friend canceling plans                        .24*
  Romantic partner ending close relationship    .31*
  Not getting job offer                         .24*

Note. *p < .05.

The method described in this article enjoys some of the advantages of content analysis methods in general, such as not forcing respondents to select from a few predetermined response options. By the same token, its topical focus, rationality vs. bias in attributional justifications, is relatively narrow by comparison to some more exploratory applications of content analysis. As such, this method is responsive to the suggestion by McAdams and Zeldow (1993) that researchers using content analysis "err on the side of depth over breadth" (p. 244). Future research could employ this measure in theoretically relevant individual-differences studies, such as linking rationality of justifications for causal attributions with current or past major depression.

Table 3
Average Rationality Ratings for Justifications of Attributions

Type of Event           Mean    SD
Positive Expected       2.44    .87
Positive Unexpected     2.49    .73
Negative Expected       2.27    .79
Negative Unexpected     2.53    .82

Note. N = 90. Each mean is the average across participants of a 3-item average of cognitive bias ratings (1 = clearly irrational/biased, 5 = clearly rational).

REFERENCES

Bartko, J. J. (1976). On various intraclass correlation reliability coefficients. Psychological Bulletin, 83, 762-765.
Beck, A. T. (1987). Cognitive models of depression. Journal of Cognitive Psychotherapy, 1, 5-37.
Cook, M. L., & Peterson, C. (1986). Depressive irrationality. Cognitive Therapy and Research, 10, 293-298.
Davison, G. C., Vogel, R. S., & Coffman, S. G. (1997). Think-aloud approaches to cognitive assessment and the articulated thoughts in simulated situations paradigm. Journal of Consulting and Clinical Psychology, 65, 950-958.
Haaga, D. A. F. (1989). Articulated thoughts and endorsement procedures for cognitive assessment in the prediction of smoking relapse. Psychological Assessment, 1, 112-117.
Haaga, D. A. F., Dyck, M. J., & Ernst, D. (1991). Empirical status of cognitive theory of depression. Psychological Bulletin, 110, 215-236.
Kanazawa, S. (1992). Outcome or expectancy? Antecedent of spontaneous causal attribution. Personality and Social Psychology Bulletin, 18, 659-668.
Kelley, H. H. (1973). The process of causal attribution. American Psychologist, 28, 107-128.
Lee, F., & Peterson, C. (1997). Content analysis of archival data. Journal of Consulting and Clinical Psychology, 65, 959-969.
McAdams, D. P., & Zeldow, P. B. (1993). Construct validity and content analysis. Journal of Personality Assessment, 61, 243-245.
McDermut, J. F., Haaga, D. A. F., & Bilek, L. A. (1997). Cognitive bias and irrational beliefs in major depression and dysphoria. Cognitive Therapy and Research, 21, 459-476.

Smith, C. P. (Ed.) (1992). Motivation and personality: Handbook of thematic content analysis. New York: Cambridge University Press.
Smith, T. W., O'Keeffe, J. L., & Christensen, A. J. (1994). Cognitive distortion and depression in chronic pain: Association with diagnosed disorders. Journal of Consulting and Clinical Psychology, 62, 195-198.
Watkins, J. T., & Rush, A. J. (1983). Cognitive response test. Cognitive Therapy and Research, 7, 425-436.
White, J., Davison, G. C., Haaga, D. A. F., & White, K. (1992). Cognitive bias in the articulated thoughts of depressed and nondepressed psychiatric patients. Journal of Nervous and Mental Disease, 180, 77-81.
Winer, B. J. (1971). Statistical principles in experimental design (2nd ed.). New York: McGraw-Hill.