The Lens Model and Linear Models of Judgment


John Miyamoto    Email: jmiyamot@uw.edu
October 3, 2017    File = D:\P466\hnd02-1.p466.a17.docm
http://faculty.washington.edu/jmiyamot/p466/p466-set.htm
Psych 466: Judgment and Decision Making, Autumn 2017

The Lens Model and Linear Models of Judgment

The lens model is a conceptualization, proposed by Egon Brunswik, of the relationship between the environment and the perceiving, cognizing organism. Brunswik emphasized the idea that typically we do not experience objects and qualities directly from an environment; rather, we infer the properties of objects from cues that are more directly available to us. In this handout, I will call the person who is making judgments about the environment "the judge."

Figure 3.1 from Hastie and Dawes is the standard depiction of Brunswik's lens model. The "to-be-judged criterion" on the left is a state of the world that the judge is trying to evaluate. The "judgment" on the right is the judge's evaluation of this state. It is assumed that the judge cannot directly measure or evaluate the criterion; it might be the case that this is impossible in principle or simply difficult in practice. For example, if I meet someone and I'm trying to figure out whether this person supports or opposes gay marriage, then the criterion is the true degree to which this person supports or opposes gay marriage, and the cues would be any information that I can glean from this person's appearance, conversation, and background. The judgment would be my inference (guess) as to the degree to which this person supports or opposes gay marriage. In general, the cues are pieces of information that the judge is able to observe that are associated with the true state of the criterion. The ecological validities are the validities of the cues, i.e., the conditional probability that the object belongs to a category given the presence of the cue. Cue utilization is the weight that the judge gives to a particular cue.

[Figure 3.1 in Hastie & Dawes: Lens model conceptual framework for the global judgment process.]

Table 1 below illustrates this terminology with three examples of judgment problems.

Table 1

Example 1:
  Criterion: Age of a man on the street
  Cues: Facial appearance, hair color, skin quality, body motion, mode of dress
  Judgment: Estimate (guess) of the man's age

Example 2:
  Criterion: Future college GPA of a high school student
  Cues: High school GPA, SAT scores, recommendations, essay
  Judgment: Estimate (guess) of the student's future college GPA

Example 3:
  Criterion: Future satisfaction of living with a roommate
  Cues: Social style when interviewed; reputation of the person among your friends; self-reported habits of cleanliness, neatness, cooperativeness, etc.
  Judgment: Estimate of future satisfaction of living with this roommate

Examples 1, 2 and 3 are all examples of probabilistic judgment problems. In a probabilistic judgment problem, a judge has to make a judgment about a criterion based on uncertain cues. The lens model is a way to conceptualize the situation in a probabilistic judgment problem.

Judgment Problem: Predicting College GPA from Information About High School Applicants to College

The remainder of this handout has to do with two questions:

Question 1: Suppose we are psychologists who are trying to help the judge make good decisions. What kind of decision procedure should we suggest?

Question 2: Are the decision procedures that we suggest better or worse than intuitive judgments made by the judge?

* Intuitive judgments = holistic judgments = global judgments: In an intuitive judgment, the judge may take all of the different cues into consideration, but the judgment is based on a guess or feeling about what these cues mean; the judgment is not based on the calculation of an explicit statistical formula.

There are actually several slightly different decision procedures that are worth considering. All of these procedures can be illustrated with the example from Baron's chapter on Quantitative Judgment. The data for this example are shown in Table 2 below.

Table 2 (combines the data from Tables 20.2 and 20.4 of Baron's chapter on Quantitative Judgment)

   (1)   (2)   (3)   (4)  (5)  (6)    (7)     (8)    (9)
  subj   COL   SAT   REC  ESS  GPA    PRE    ERROR   JUD
    1    3.8  1500    4    4   4.0   3.910   0.110   4.0
    2    3.6  1310    4    3   3.6   2.902  -0.698   3.1
    3    3.5  1300    5    3   3.9   3.560   0.060   3.8
    4    3.2  1280    3    5   3.7   3.428   0.228   3.4
    5    3.0  1260    4    4   3.5   2.921  -0.079   3.0
    6    2.8  1210    3    4   3.4   2.631  -0.169   2.7
    7    2.5  1320    5    3   3.5   2.807   0.307   3.0
    8    2.2  1220    4    3   3.2   2.129  -0.071   2.3
    9    2.0  1200    2    5   3.0   1.997  -0.003   1.9
   10    1.5  1170    3    2   3.2   1.811   0.311   2.1

In this example, the judge is trying to predict the college GPA for high school students who have applied to his or her college.
The researcher has the data shown in Table 2 available to her. The variables in this analysis are:

COL    The college GPA of the student. COL is the criterion.

The variables in columns 3-6 are the cues (Baron calls them predictor variables):

SAT    The SAT score (a standardized educational test for high school students).

REC    A rating of the recommendations on a 5-point scale. The strongest possible recommendation would get a 5; the weakest possible recommendation would get a 1.

The judge has to make the ratings of the recommendations based on his subjective evaluation of the strength of the recommendation.

ESS    A rating of the student's essay on a 5-point scale. The best quality essay would get a 5; the worst quality essay would get a 1. Again, the judge has to make the ratings of the essays based on his subjective evaluation of their quality.

GPA    The high school GPA of the student.

The next two variables are computed by the statistical formula (multiple regression equation):

PRE    The predicted college GPA based on a statistical formula (multiple regression). Only the variables COL, SAT, REC, ESS and GPA are used to compute the formula; the judge plays no role in producing the formula other than to give the ratings for REC and ESS.

ERROR  The deviation of PRE from the actual value of COL, i.e., ERROR = PRE - COL.

The last variable is the judged global rating for each case (each student who is applying to college):

JUD    The judge's intuitive prediction of the student's college GPA after seeing the cues SAT, REC, ESS and GPA. When the judge makes this prediction, he does not get to see COL until after he has made his prediction.

It may seem strange that the values of COL are shown in Table 2, because the judge is trying to estimate the value of COL based on the cue variables. If the judge already knows the values of COL, why is he trying to estimate them? The answer is that in some cases we may know the values of COL (for example, for students who have already been admitted to college), whereas in other cases we may not know the values of COL (for example, prior to admitting the students). We can use information from the known cases to help us predict cases for which COL is not known.

Four Methods for Predicting College GPA

Method 1: Compute a multiple regression model of the criterion.
Method 2: Compute a multiple regression model of the judge (MUD).
Method 3: Don't compute a model. Just use importance weights (subjective weights that are assigned by the judge).
Method 4: Don't compute a model. Just use unit weights.

These four methods are explained below.

Method 1: Compute a multiple regression model of the criterion.

Characteristics of the situation to which this procedure applies: The judge has available to him data like those shown in Table 2. These data include the values of the criterion as well as the values of the cues, and they can be used to fit a multiple regression model. The judge wants to predict COL for a set of high school students. For these students, he has the data for the cues (SAT, REC, ESS, GPA) available to him, but COL is not available to him because the students have not yet been admitted.

Procedure: Use the statistical procedure called multiple regression to compute a formula that predicts COL based on the cues SAT, REC, ESS, and GPA. The inputs to this procedure are the values of these variables (including COL) shown in Table 2. The output of this procedure is the following formula:

    PRE = 0.000175 SAT + 0.092 REC + 0.217 ESS + 1.893 GPA - 5.161    (1)

Equation (1) is referred to as a linear prediction equation for the criterion, COL. The numbers 0.000175, 0.092, ..., 1.893 are called the regression coefficients or the regression weights. The last term, -5.161, is the intercept term. A statistical procedure called multiple regression takes the data in Table 2 and computes the regression weights and the intercept term. To get the predicted value for any particular case in Table 2, we plug in the values of the cue variables. For example, to get the predicted value for Student 1 in Table 2, we compute

    PRE(Student 1) = 0.000175 (1500) + 0.092 (4.0) + 0.217 (4.0) + 1.893 (4.0) - 5.161 = 3.910

Of course, this prediction is not so important for Student 1 because we already know the true value of his or her COL (COL for Student 1 is 3.8). But the prediction equation is useful when we get an application from a new student for whom we don't know COL. For example, if we get an application from a student whose predictor variables (cues) have the values SAT = 1200, REC = 3.7, ESS = 3.9, GPA = 3.2, then the prediction equation tells us that our best prediction for this student is:

    PRE(new student) = 0.000175 (1200) + 0.092 (3.7) + 0.217 (3.9) + 1.893 (3.2) - 5.161 = 2.2933

SELECTION PROBLEM. If your goal is to pick the K students who are predicted to have the highest college GPA's: Choose the K new students who have the highest values of PRE. E.g., if you want to choose the 50 new students who have the highest predicted GPA's, compute PRE for all of the applicants and select the applicants with the 50 highest values of PRE.

PREDICTION PROBLEM. If the goal is to predict the college GPA for a set of applicants: Use Equation (1) to compute PRE for these applicants; these are the predicted values of college GPA.

Comparison of the procedure for Method 1 to human judgment: Studies have shown that when large amounts of data for the criterion are available, the predictions for new cases derived from a regression model are much more accurate than the intuitive judgments of a judge. Gigerenzer and Brighton (2009) point out that this may not be true if the available data set is not large. Notice that the statistical formula shown in Equation (1) is actually very simple: each cue is multiplied by a weight, and then these products are added together to produce the prediction. Judges often think that they produce their judgments by means of a complex, interactive, nonlinear evaluation of the cues. Whether or not this is really true, the results of these studies show that a much simpler combination of the cues can produce better predictions than the processes used by human judges.
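The arithmetic of Equation (1), and the selection problem that follows it, can be checked with a short Python sketch. This is an illustration, not part of the original handout: the function name `pre` is mine, applicant A is the handout's new-student example, and applicants B and C are made-up cases added to show ranking.

```python
# Equation (1): the proper linear model of the criterion, using the regression
# weights and intercept given in the handout.
def pre(sat, rec, ess, gpa):
    return 0.000175 * sat + 0.092 * rec + 0.217 * ess + 1.893 * gpa - 5.161

# Check against Table 2: Student 1 (SAT=1500, REC=4, ESS=4, GPA=4.0)
# gives PRE of about 3.910, so ERROR = PRE - COL = 3.910 - 3.8 = 0.110.
print(pre(1500, 4, 4, 4.0))      # about 3.910
print(pre(1200, 3.7, 3.9, 3.2))  # the new-student example: about 2.2933

# SELECTION PROBLEM: rank applicants by PRE and take the K highest.
# Applicant A is the handout's example; B and C are hypothetical.
applicants = {
    "A": (1200, 3.7, 3.9, 3.2),
    "B": (1400, 4.0, 3.0, 3.8),
    "C": (1100, 2.0, 2.5, 2.4),
}
ranked = sorted(applicants, key=lambda name: pre(*applicants[name]), reverse=True)
print(ranked[:2])  # the K = 2 applicants with the highest predicted GPA
```

The same `pre` function serves both uses of the model: computing a quantitative prediction for each applicant, and sorting applicants for selection.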

Method 2: Compute a multiple regression model of the judge (MUD)

Characteristics of the situation to which this procedure applies: The judge wants to predict COL for a set of high school students. For these students, he has the data for the cues (SAT, REC, ESS, GPA) available to him, but COL is not available to him because the students have not yet been admitted. For example, suppose that the judge has available the data for the cues shown in Table 2 (columns 3-6: SAT, REC, ESS, GPA) but not the data for the other columns in the table.

Procedure: We show the judge the data for the cues in Table 2. We ask the judge to make an intuitive prediction of the college GPA for each case. Column 9, labeled "JUD", shows the intuitive predictions that were made by the judge. Now we use multiple regression to compute a formula that predicts JUD based on the cues SAT, REC, ESS, and GPA. The inputs to this procedure are the values of the variables JUD, SAT, REC, ESS, and GPA (notice that COL is not input to the procedure because we don't know it yet). The output of this procedure is the following formula:

    MUD = 0 SAT + 0.1 REC + 0.1 ESS + 2.0 GPA - 4.8    (2)

MUD is a model of the judge.[1] Finally, we use Equation (2) to predict the college GPA for any new students for whom we know SAT, REC, ESS and GPA, but not the college GPA (COL). For example, if we get an application from a student whose predictor variables (cues) have the values SAT = 1200, REC = 3.7, ESS = 3.9, GPA = 3.2, then Equation (2) tells us that our prediction for this student is:

    MUD(new student) = 0 (1200) + 0.1 (3.7) + 0.1 (3.9) + 2.0 (3.2) - 4.8 = 2.36

SELECTION PROBLEM. If your goal is to pick the K students who are predicted to have the highest college GPA's: Choose the K new students who have the highest values of MUD. E.g., if you want to choose the 50 new students who have the highest predicted GPA's, use Equation (2) to compute MUD for all of the applicants and select the applicants with the 50 highest values of MUD.

PREDICTION PROBLEM. If the goal is to predict the college GPA for a set of applicants: Use Equation (2) to compute MUD for these applicants; these are the predicted values of college GPA.

Comparison of the procedure for Method 2 to human judgment: Studies have shown that when data for the criterion become available, e.g., after admitting the students, the model of the judge produces more accurate predictions than the intuitive judgments of the judge. Note that this result is superficially rather odd: the regression coefficients in Equation (2) were computed to predict JUD, the judge's predictions. Nevertheless, it has been found that the model of the judge, i.e., Equation (2), does a good job of predicting the criterion, COL. The results for this case have several implications:

[1] Baron came up with this notation. I'm not a big fan of it, but I'm using it so that the Baron chapter will look familiar to you.

a) Although judges often think that they produce their judgments by means of a complex, interactive, nonlinear evaluation of the cues, the results for this case show that a much simpler combination of the cues can produce better predictions than the processes used by human judges. (Same conclusion as for Method 1.)

b) We don't even need to know the value of the criterion (COL) in order to find a statistical formula (prediction equation) that can outperform the judge.

c) The statistical formula that we found in Equation (2) is a simple additive model of the judge's intuitive policy. Studies show that a simple additive model of the judge's policy will typically outperform the judge (even for judges who are experts in their field). The reason for this is that the statistical formula is consistent: it treats every case by the same formula. A human judge has all sorts of random variations in his or her judgment, and these random variations simply increase the inaccuracy (error) of the judge's predictions.

Method 3: Don't use multiple regression to compute a model. Just use importance weights in the prediction equation. (a.k.a. the MAUT Method or SMART Method)

Importance weights are positive or negative numbers that reflect how strongly a particular cue influences the prediction. The decision maker simply makes up these numbers to reflect his or her judgment of the importance of an attribute.

Characteristics of the situation to which this procedure applies: (Exactly the same as for Method 2.)

Procedure:

Convert the values of the cues to z-scores.
* Compute the mean of the SAT scores for all students in the sample (m_SAT); compute the standard deviation of SAT scores for all students in the sample (sd_SAT). If SAT_1 is the SAT score for Student 1, then the SAT z-score for Student 1 is z_SAT,1 = (SAT_1 - m_SAT)/sd_SAT.
* Apply a similar procedure to convert REC, ESS and GPA to z-scores.

Based on your judgment, choose "importance weights" for the dimensions.
* Example: Consider the cues SAT, REC, ESS, and GPA. Suppose you give SAT an importance weight of +1. If you think that REC is slightly less important than SAT, you could give it a weight of, e.g., +0.7. If you think that ESS is about half as important as SAT, you would give it a weight of +0.5. If you think that GPA is twice as important as SAT, give it a weight of +2.

Convert the importance weights to normalized weights, i.e., replace each weight w_k with w_k / SUM_i |w_i|.
* Example: Suppose that the judged importance weights are +1 (SAT), +0.7 (REC), +0.5 (ESS) and +2 (GPA). Then SUM_i |w_i| = 1 + 0.7 + 0.5 + 2 = 4.2, and the normalized weights are +1/4.2 = 0.24 (SAT), +0.7/4.2 = 0.17 (REC), +0.5/4.2 = 0.12 (ESS) and +2/4.2 = 0.47 (GPA).

Use the normalized weights to compute the predicted z-score for each student according to the formula:

    Predicted Z = 0.24 z_SAT + 0.17 z_REC + 0.12 z_ESS + 0.47 z_GPA    (3)

For example, suppose that we have an application from a student whose predictor variables (cues) have the z-scores z_SAT = 1.3, z_REC = 0.60, z_ESS = -0.42, and z_GPA = 0.80. Then Equation (3) tells us that the predicted z-score for the college GPA of this student is:

    Predicted Z(new student) = 0.24 (1.3) + 0.17 (0.60) + 0.12 (-0.42) + 0.47 (0.80) = 0.74
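The normalize-and-weight procedure above can be sketched in Python. This is my illustration, not part of the handout; the helper names are made up. Note that computing with the exact normalized weights (+1/4.2, etc.) reproduces the handout's Predicted Z of about 0.74; the two-decimal weights in Equation (3) are rounded versions of these exact values.

```python
# Importance weighting (Method 3): normalize the subjective importance weights
# so their absolute values sum to 1, then take a weighted sum of cue z-scores.
importance = {"SAT": 1.0, "REC": 0.7, "ESS": 0.5, "GPA": 2.0}

def normalized(weights):
    total = sum(abs(w) for w in weights.values())  # here 1 + 0.7 + 0.5 + 2 = 4.2
    return {cue: w / total for cue, w in weights.items()}

def predicted_z(z_scores, weights):
    w = normalized(weights)
    return sum(w[cue] * z_scores[cue] for cue in w)

# The handout's example applicant, expressed in z-score units:
z = {"SAT": 1.3, "REC": 0.60, "ESS": -0.42, "GPA": 0.80}
print(predicted_z(z, importance))  # about 0.74, as in Equation (3)
```

Normalization matters only for putting the prediction on a convenient scale; the rank order of applicants is the same with raw or normalized weights.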

SELECTION PROBLEM. If your goal is to pick the K students who are predicted to have the highest college GPA's: Use Equation (3) to compute the predicted z-scores for the college GPA's of all of the applicants. Then choose the K new students who have the highest values of Predicted Z.

PREDICTION PROBLEM. If the goal is to predict the college GPA for a set of applicants: The solution is described in Appendix A. In Psych 466, it is not necessary to work out the mathematical details, although they are fairly simple.

Comparison of the procedure for Method 3 to human judgment: Just as in the results for Methods 1 and 2, studies have shown that when data for the criterion become available, e.g., after admitting the students, the importance weighting model produces more accurate predictions than the intuitive judgments of the judge. The results for this case have several implications:

a) Same as implication (a) for Method 2.

b) Same as implication (b) for Method 2.

c) We don't even need to do a complicated calculation of the regression weights in order to find a model that can outperform the judge. All we need to do is guess the relative importance of the various cues.

d) An importance weighting model is an extremely simple additive model of the judge's intuitive policy. It will typically outperform the intuitive judgment (even for judges who are experts in their field). Again, the reason for this is that the statistical formula is consistent: it treats every case by the same formula. A human judge has all sorts of random variations in his or her judgment, and these random variations simply increase the inaccuracy (error) of the judge's predictions.

Method 4: Don't compute a model. Just use unit weights. (Unit Weighting Model)

Weights = +1 for all positive variables, and -1 for all negative variables.

Characteristics of the situation to which this procedure applies: (Exactly the same as for Method 2.)

Procedure:

Convert the values of the cues to z-scores.

Look at the cues SAT, REC, ESS, and GPA. If we know that a cue is positively related to college GPA, we give the cue a weight of +1.0; if we know that a cue is negatively related to college GPA, we give the cue a weight of -1.0.

Convert the weights to normalized weights, i.e., replace each weight w_k with w_k / SUM_i |w_i|.
* Example: SAT, REC, ESS and GPA are all positively related to college GPA (COL); therefore they all have a unit weight of +1.0. The sum of the absolute values of the weights is SUM_i |w_i| = 1 + 1 + 1 + 1 = 4. Therefore the normalized weights are 0.25, 0.25, 0.25, and 0.25.

Use the normalized weights to compute the predicted z-score for each student according to the formula:

    Predicted Z = 0.25 z_SAT + 0.25 z_REC + 0.25 z_ESS + 0.25 z_GPA    (4)

For example, suppose that we have an application from a student whose predictor variables (cues) have the z-scores z_SAT = 1.3, z_REC = 0.60, z_ESS = -0.42, and z_GPA = 0.80. Then Equation (4) tells us that the predicted z-score for the college GPA of this student is:

    Predicted Z(new student) = 0.25 (1.3) + 0.25 (0.60) + 0.25 (-0.42) + 0.25 (0.80) = 0.57
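Unit weighting is the same computation with every weight set to +1 (or -1 for negatively related cues), so after normalization the predicted z-score is just the mean of the cue z-scores. The sketch below (my illustration, not from the handout) also applies the back-conversion of Appendix A's Equation (5); the mean 2.9 and standard deviation 0.5 are the guessed values used in the Appendix A example.

```python
# Unit weighting (Method 4): all four cues are positively related to COL, so
# each gets weight +1; normalized weights are 0.25 each, i.e., a simple mean.
def unit_weight_z(z_scores):
    return sum(z_scores.values()) / len(z_scores)

# Equation (5) from Appendix A: convert a predicted z-score back to the GPA
# scale, given a guessed criterion mean and standard deviation.
def z_to_gpa(z, mean_col=2.9, sd_col=0.5):
    return z * sd_col + mean_col

z = {"SAT": 1.3, "REC": 0.60, "ESS": -0.42, "GPA": 0.80}
pz = unit_weight_z(z)
print(pz)             # about 0.57, as in Equation (4)
print(z_to_gpa(pz))   # about 3.185 on the GPA scale
print(z_to_gpa(0.3))  # the Appendix A example: about 3.05
```

For the selection problem the back-conversion is unnecessary, since ranking by predicted z-score and ranking by predicted GPA give the same order.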

SELECTION PROBLEM. If your goal is to pick the K students who are predicted to have the highest college GPA's: Use Equation (4) to compute the predicted z-scores for the college GPA's of all of the applicants. Then choose the K new students who have the highest values of Predicted Z.

PREDICTION PROBLEM. If the goal is to predict the college GPA for a set of applicants: The solution is described in Appendix B. In Psych 466, it is not necessary to work out the mathematical details, although they are fairly simple.

Terminology: Equation (4) is called a unit-weighting regression model because, prior to normalizing the weights, all of the weights are equal to +1.0 or -1.0. Equation (1) is called a proper linear model because it is the equation with the best-fitting regression weights for the criterion. We can prove that Equation (1) will have more accurate predictions than any other linear regression model if the model is fit to a large set of data. Equation (2) can also be called a proper linear model because it is the equation with the best-fitting regression weights for the judge's intuitive predictions. Equation (1) is a proper linear model for the criterion; Equation (2) is a proper linear model for the judge's predictions. A prediction equation with weights that are different from the best-fitting regression weights is called an improper linear model. Equations (3) and (4) are examples of improper linear models.

Comparison of the procedure for Method 4 to human judgment: Just as in the results for Methods 1 and 2, studies have shown that when data for the criterion become available, e.g., after admitting the students, the unit weighting model produces more accurate predictions than the intuitive judgments of the judge. The results for this case have several implications:

a) Same as implication (a) for Method 2.

b) Same as implication (b) for Method 2.

c) We don't even need to do a complicated calculation of the regression weights in order to find a model that can outperform the judge. All we need to know is which cues are positively related to the criterion and which are negatively related to the criterion. Positive cues get a weight of +1.0, and negative cues get a weight of -1.0.

d) A unit-weighting regression model is an extremely simple additive model of the judge's intuitive policy. Studies show that a unit-weighting model of the judge's policy will typically outperform the intuitive judgments of a judge (even for judges who are experts in their field). Again, the reason for this is that the statistical formula is consistent: it treats every case by the same formula. A human judge has all sorts of random variations in his or her judgment, and these random variations simply increase the inaccuracy (error) of the judge's predictions.

Appendix A. If we are using Method 3 and the goal is to predict the college GPA for a set of applicants: Use Equation (3) to compute the predicted z-scores for the college GPA's of all of the applicants. Now you have to convert the predicted z-scores back to the GPA scale (ranging from 0 to 4.0). To do this, you need to know, or be willing to guess, the mean college GPA and the standard deviation of college GPA for all students who are enrolled at the college. Let M_COL stand for the mean college GPA and let SD_COL stand for the standard deviation of the college GPA. Now compute the predicted college GPA by the equation:

    Predicted College GPA = (Predicted Z-score) x SD_COL + M_COL    (5)

For example, if you think that the mean college GPA, M_COL, is equal to 2.9 and the standard deviation of college GPA is 0.5, then a student with a predicted z-score of +0.3 has a predicted college GPA of

(0.3) x 0.5 + 2.9 = 3.05. Similarly, a student with a predicted z-score of -0.4 has a predicted college GPA of (-0.4) x 0.5 + 2.9 = 2.7.

Appendix B. If we are using Method 4 and the goal is to predict the college GPA for a set of applicants: Use Equation (4) to compute the predicted z-scores for the college GPA's of all of the applicants. Now you have to convert the predicted z-scores back to the GPA scale (ranging from 0 to 4.0). To do this, use the method that is described in Appendix A (the only difference between Method 3 and Method 4 is that the predicted z-scores will be different).

Table 3: Summary of the Pros and Cons of Methods 1-4

Columns: IJ = Intuitive Judgment; MR = Multiple Regression Model; MUD = Model of the Judge; IW = Importance Weighting Model; UW = Unit Weighting Model

Possible Requirements                              IJ    MR    MUD   IW      UW
You need a list of predictor variables,
  e.g., SAT, REC, ESS, and GPA.                    NO    YES   YES   YES     YES
You need to know the value of the criterion
  for a list of cases, e.g., the value of COL.     NO    YES   NO    NO      NO
You need to have the judge make predictions
  for a list of cases.                             NO    NO    YES   NO      NO
You need to decide what is the relative
  importance of the predictor variables.           NO    NO    NO    YES     NO
You need to convert the predictor variables
  to z-scores.                                     NO    NO    NO    YES a   YES a
You need to estimate or guess the mean and
  standard deviation of the criterion in the
  general population. c                            NO    NO    NO    YES d   YES d
Rank order of effort needed to use this method
  (1 = least effort, 5 = most effort)              1     5     4     3       2
Rank order of accuracy (1 = best, 5 = worst)       5     1     2     3 b     4 b

a  This is easy to do with a computer.
b  Importance weighting can be less accurate than unit weighting if the decision maker doesn't think carefully about how to choose importance weights. We don't go into this issue in this class.
c  For the importance weighting and the unit weighting models, you need to estimate the mean and variance of the criterion only if you want to produce a quantitative prediction of the criterion.
d  You need the estimated mean and variance of the criterion in order to convert predicted z-scores on the criterion back to the original scale of the criterion. If you only want to rank the different cases, i.e., determine which is best, which is 2nd best, which is 3rd best, etc., then you do not need to estimate the mean and variance of the criterion. Practically speaking, this is not hard to do.

References

Baron, J. (1994). Thinking and deciding (2nd ed.). New York: Cambridge University Press. Chapter 20: Quantitative judgment.

Brunswik, E. (1952). The conceptual framework of psychology. Chicago: University of Chicago Press.

Brunswik, E., Hammond, K. R., & Stewart, T. R. (Eds.) (2000). The essential Brunswik: Beginnings, explications, and applications. New York: Oxford University Press.
* See also the Brunswik Society website: http://www.brunswik.org/

Dawes, R. M., Faust, D., & Meehl, P. E. (1989). Clinical versus actuarial judgment. Science, 243, 1668-1674.

Gigerenzer, G., & Brighton, H. (2009). Homo heuristicus: Why biased minds make better inferences. Topics in Cognitive Science, 1, 107-143.

Hastie, R., & Dawes, R. M. (2009). Rational choice in an uncertain world (2nd ed.). Thousand Oaks, CA: Sage Publications. Chapter 3: A general framework for judgment.

** End of File **