
QUALITY IN STRATEGIC COST ADVICE: THE EFFECT OF ANCHORING AND ADJUSTMENT

Chris Fortune 1 and Mel Lees 2

1 School of the Built Environment, Liverpool John Moores University, Clarence Street, Liverpool L3 5UG, UK
2 Research Centre for the Built and Human Environment, University of Salford, Bridgewater Building, Salford, M7 9NU, UK

The formulation of early stage building project cost advice for clients requires the professionals concerned to exercise judgement. The exercise of judgement is a human cognitive process that can be subject to errors, bias and heuristics. One of the biases that affects judgement is termed anchoring and adjustment. This study seeks to add to the literature related to judgement in early cost advice by ascertaining whether construction professionals are prone to make judgements that are biased by their reliance on the anchoring and adjustment heuristic. The paper reports the development of an appropriate measuring instrument and the results of its application to a group of thirty-four subjects. The subjects were a convenience sample drawn from a cohort of final stage part-time students in quantity surveying. Subjects were tested on their propensity to make biased judgements via word problems set in their own subject specific domain. The results of the work revealed that the subjects displayed the same level of error in response to the context based word problems as had been displayed in previous studies in which subjects responded to word problems set in non-work related contexts. The paper concludes by setting out the case for further empirical work in this area in order to address the impact of this and other biases on a wider sample of construction professionals.

Keywords: adjustment, anchoring, bias, error, judgement.

INTRODUCTION

Fortune and Lees (1996) reported an empirical study that obtained a practitioner assessment of the relative performance of the building project cost modelling techniques actually used in practice.
The extent to which a particular cost model relied upon judgement was found to be a factor that influenced the model's incidence of use. The results of that study showed that judgement could be considered as either a positive or a negative influence on the quality of the advice provided, as perceived by the practitioner. The positive component is the need for the practitioner to adjudicate on the raw outcome of a process or technique. The negative side is the nature of human judgement and the propensity of individuals to make errors. Thus it can be seen that research designed to improve the quality of early stage strategic cost advice must also address the development of a better understanding of the role of human judgement in its formulation. Previous work has established that practitioners make errors due to anchoring and adjustment bias in arriving at judgements when confronted with problems unrelated to their industry context (see Fortune and Lees 1998). In addition, the potential for humans with particular learning characteristics to make similar cognitive errors has also been investigated (see Fortune and Lees 1997).

This study seeks to make a further contribution to the research in the field of the formulation of strategic cost advice by ascertaining whether construction professionals have a propensity to make errors of judgement, due to anchoring and adjustment bias, when the problems are set in an industry context. The paper first sets out the wider context for the study and then reports on the development and application of an appropriate measuring instrument to thirty-four subjects drawn from a cohort of final stage part-time undergraduate students in surveying. The results of the investigation are then analysed using Minitab for Windows (v10), and the paper concludes by outlining a plan for future action that will further contribute to the improvement of quality in decision-making in the field of strategic cost advice.

Fortune, C and Lees, M (1998) Quality in strategic cost advice: the effect of anchoring and adjustment. In: Hughes, W (Ed.), 14th Annual ARCOM Conference, 9-11 September 1998, University of Reading. Association of Researchers in Construction Management, Vol. 1, 227-35.

CONTEXT

Following a review of literature related to early stage building project price forecasting and judgement, Raftery (1995) asserted that reliable strategic cost advice required the input of human judgement. However, both Raftery (1995) and Birnie (1995) pointed out that humans make mistakes when making judgements, and they stated that more work was needed to understand the behavioural processes involved. Tversky and Kahneman (1974) and Kahneman, Slovic and Tversky (1985) provided a comprehensive review of the literature related to the existence of systematic biases that affect judgement. They asserted that, in making judgements under uncertainty, people in general do not appear to follow the calculus of chance or the statistical theory of prediction; instead they rely upon a number of simplifying strategies, or heuristics, that direct their judgements. Such heuristics can sometimes lead to reasonable judgements and sometimes lead to severe and systematic errors.
The heuristics that were acknowledged as being generalisable across the population were (1) the availability heuristic, (2) the representativeness heuristic and (3) the anchoring and adjustment heuristic. The potential biases attributed to each of these heuristics were listed by Bazerman (1993) as: ease of recall, retrievability, presumed associations (the availability heuristic); insensitivity to base rates, insensitivity to sample size, misconceptions of chance, regression to the mean, the conjunction fallacy (the representativeness heuristic); insufficient anchor adjustment, conjunctive and disjunctive events, overconfidence (the anchoring and adjustment heuristic). Mak and Raftery (1992) acknowledged that the majority of the research in cognitive psychology has led to a common understanding and acceptance of the existence of the above listed heuristics and biases in lay people thinking intuitively and making judgements about problems. However, they pointed out that there was as yet no consensus in the literature on bias. In particular they noted that there was little empirical evidence of the propensity for bias in judgements made by experts considering context related problems. One such study was carried out by Mak and Raftery (1992) in an experiment with quantity surveying students in a simulated real life price forecasting situation. Their conclusions indicated that there was little support for the existence of severe and systematic bias, and that the previous research findings on the existence of generalised bias may have been too pessimistic when practitioners were asked to make judgements on matters within their own field. However, Mak and Raftery (1992) went on to point out that their work had methodological limitations and they

suggested that further work with experienced practitioners and subjects from different institutions would be needed to validate their work. This study seeks to add to the empirical evidence so far collected on practitioners' judgements on word problems set in their own subject related domain. The study has centred on ascertaining practitioners' propensity to be affected by the anchoring and adjustment bias when making judgements on work related word problems.

METHODOLOGY

General

The central problem facing this study was the re-working of the measuring instrument used in the previous study (Fortune and Lees 1997) to introduce an industry context. The instrument takes the form of a series of problems that the subject is required to attempt. In the previous study these were taken from the literature sources set out above and were, therefore, not set in a construction context. Beach et al. (1987) criticised the approach of asking practitioners to solve problems that were not set in the context appropriate to their expertise. They argued that this would inevitably lead to evidence of error, as the subjects would not apply themselves to the task in hand. The current study focuses on the anchoring and adjustment bias, but the previous study had covered the four main sources of error: cognitive, availability, representative and anchoring. The test used in that study had contained twelve questions, three for each of the main types of error. Each question dealt with a particular sub-type of the main error. Originally, the test included 36 questions, with each sub-type having three questions randomly spread throughout the test, but piloting suggested that this made the test too long: the subjects lost interest and the results became less valid. Since the current study was limited to one of the main error types, anchoring and adjustment, the number of questions for each sub-type could be increased.
The three sub-types of the anchoring and adjustment bias, together with the question numbers used in the test, are set out in Table 1. The original test question on the conjunctive and disjunctive sub-type of error was based on the selection of coloured marbles from a bag and had its roots in statistical probabilities. This question was re-set in the context of drawing coloured bricks from a pack containing bricks of two different colours. The statistical background to the question was not affected by this change.

Table 1: Test questions and sub-types of errors (anchoring and adjustment bias)

Sub-type of error                 Test question number
insufficient anchor adjustment    1, 4, 7
conjunctive and disjunctive       2, 5, 8
overconfidence                    3, 6, 9

In the original test the insufficient anchor adjustment questions related to the salaries of various professions. Instead of salaries, the average building prices of three different building types were used. The information was taken from the Building Cost Information Service (BCIS) and was current at the time the test was administered. Overconfidence in the original test involved the subject in guessing the population of various countries and then providing an upper and lower limit that would be their 95% confidence limits for the real value. This question was changed so that the subject had

to provide several rates for items of work contained in a price book that was current in 1992. Again, the 95% confidence limits were required. By re-working the original test as described above, the new test was developed with an industry context. For each of the sub-types of error three versions of the original question were developed, so that there were nine questions in total.

Piloting

The new test was piloted on a small group of practitioners to establish whether it could be understood and whether it was of an appropriate length. The results of the pilot indicated that the text of the test was appropriate and that the time required to undertake the test was short enough to retain the interest of the subjects. Given the limited resources available for the study, it was decided to establish a convenience sample of thirty-four subjects drawn from the full cohort of part-time students attending final level surveying degree courses at the University of Salford. Previous involvement in the formulation of early construction cost advice was the criterion set for inclusion in the subject frame, and the measuring instrument was applied in the spring of 1998.

RESULTS

The propensity for error test used questions developed in context as described above. In addition to responding to the questions, the subjects were asked to indicate on a scale of 1 to 4 how confident they were that their response was correct (1 - not at all sure, 4 - very sure the answer is correct). Therefore, for each question not only was it possible to identify whether an error had been made (ie an incorrect answer), but it was also possible to express the degree of error by using the confidence response. For example, if a question required the respondent to choose between two alternatives, A and B, one of the alternatives would be the correct answer. Answering incorrectly would indicate an error. But if the respondent was not sure about their response they could indicate a confidence level of 1; if they were very confident they could indicate a higher confidence score. An incorrect answer given with low confidence is arguably not an error at all. The scoring system took this into account and was based on the confidence of the response minus one (ie a confidence level of 1 became 0, 2 became 1 and so on), which, if the answer was incorrect, was expressed as a negative number. Therefore, the available scores for any question were +3, +2 and +1 for correct answers, ie no error; 0 where the confidence was so low that an incorrect answer could not be taken as indicative of an error; and -1, -2 and -3 for incorrect answers. For each sub-type of error there was a score for each of the three questions, and these scores were averaged to produce an overall result for the sub-type for each subject. The results are set out in Table 2.
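The scoring rule described above can be sketched as follows (a minimal illustration; the function name and signature are our own, not part of the original test):

```python
def score_response(correct: bool, confidence: int) -> int:
    """Score one answer: confidence runs 1 (not at all sure) to 4 (very sure).

    The magnitude is confidence minus one, so a confidence of 1 always
    scores 0; incorrect answers are expressed as negative numbers.
    """
    if not 1 <= confidence <= 4:
        raise ValueError("confidence must be between 1 and 4")
    magnitude = confidence - 1
    return magnitude if correct else -magnitude

# A very sure correct answer scores +3; a very sure incorrect one -3;
# an answer given with confidence 1 scores 0 whether or not it is correct.
print(score_response(True, 4), score_response(False, 4), score_response(False, 1))
```

The three question scores per sub-type are then averaged per subject, giving the figures tabulated in Table 2.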

Table 2: Results of error test (negative figures in brackets, blanks indicate incomplete responses)

Test subject   Insufficient anchor adjustment   Conjunctive and disjunctive   Overconfidence
1              (2.00)                           (1.00)
2              0.00                             1.30                          0.00
3              (2.50)                           (2.67)                        (1.00)
4              (2.00)                           (1.33)                        (0.67)
5              2.00                             (1.67)                        (1.00)
6              (2.00)                           (2.33)                        (1.33)
7              (1.00)                           (1.33)                        (1.00)
8              (0.67)                           (1.67)
9              (1.50)                           (0.67)                        (1.00)
10             (1.00)                           0.00
11             (1.50)                           (1.00)
12             0.00                             0.00                          0.00
13             (0.33)                           (1.00)                        (1.00)
14             (1.00)                           (0.33)                        0.00
15             (1.33)                           (0.67)                        (2.67)
16             (1.00)                           0.00                          0.00
17             (0.33)                           (0.67)
18             (1.00)                           (1.67)
19             (1.00)                           (1.30)
20             (0.67)                           (0.67)
21             (1.50)                           0.50                          (2.00)
22             (1.00)                           0.00
23             (2.00)                           (1.67)                        (0.50)
24             (1.00)                           (1.00)
25                                              0.00
26             (1.50)                           (1.33)                        (1.33)
27             (1.00)                           0.00                          0.00
28             (1.00)                           (1.33)                        (1.33)
29             (0.50)                           (2.00)                        0.00
30             (1.50)
31             (1.33)                           (2.00)                        (0.67)
32             (1.00)                           (1.67)                        (0.50)
33             (3.00)                           (1.33)                        0.00
34             1.00                             (1.67)                        (0.33)

ANALYSIS

The analysis of the results was carried out using Minitab for Windows (v10). The data shown in Table 2 were used to construct a distribution for each sub-type of error. The distribution shows how the sample as a whole responded in terms of errors of judgement. The scale of +3 to -3 is a continuous scale moving from high certainty of correctness to high incidence of error. There are two possible null scenarios for the expected distribution. The first is that the test is not taken seriously by the subjects and that answers and confidences are generated at random. This would mean that each confidence level had the same chance of being indicated in any given response, and would result in a flat, horizontal distribution curve. The second is that the subjects do not commit errors and would indicate a confidence level of 1 for all questions they suspected they might have got wrong. Since all confidence levels of 1 were rescored as 0, this would result in a distribution that occurred on the positive side of the x-axis only. The actual results are shown in Figures 1, 2 and 3.
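The summary statistics reported for each sub-type can be checked directly from the corresponding Table 2 column; a sketch for the insufficient anchor adjustment scores (the two-sided 95% t value for 32 degrees of freedom is hard-coded here rather than taken from a statistics library, and small discrepancies against the Minitab output arise from the two-decimal rounding of the tabulated scores):

```python
from math import sqrt
from statistics import mean, stdev

# Insufficient anchor adjustment scores from Table 2 (33 complete responses;
# bracketed table entries are negative; subject 25 gave no response).
scores = [-2.00, 0.00, -2.50, -2.00, 2.00, -2.00, -1.00, -0.67, -1.50,
          -1.00, -1.50, 0.00, -0.33, -1.00, -1.33, -1.00, -0.33, -1.00,
          -1.00, -0.67, -1.50, -1.00, -2.00, -1.00, -1.50, -1.00, -1.00,
          -0.50, -1.50, -1.33, -1.00, -3.00, 1.00]

n = len(scores)
m = mean(scores)
s = stdev(scores)                 # sample standard deviation
t_crit = 2.037                    # approx. two-sided 95% t value, df = 32
half_width = t_crit * s / sqrt(n)
ci = (m - half_width, m + half_width)

print(f"n={n}  mean={m:.4f}  sd={s:.4f}  95% CI=({ci[0]:.4f}, {ci[1]:.4f})")
```

This reproduces, to within rounding, the mean of -1.0333, standard deviation of 0.9363 and 95% confidence limits of -1.3653 and -0.7013 reported with Figure 1.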

Figure 1: Distribution of responses for insufficient anchor adjustment (n = 33; mean = -1.0333; standard deviation = 0.9363; 95% confidence interval for the mean = -1.3653 to -0.7013; minimum = -3.0000; 1st quartile = -1.5000; median = -1.0000; 3rd quartile = -0.7000; maximum = 2.0000; Anderson-Darling normality test A-squared = 1.1191, p = 0.0054)

The questions about insufficient anchor adjustment required the respondent to estimate the average building prices in £/m2 of gross floor area for one-off dwellings, offices and factories. In each case the respondent was given fictitious information about the most recent equivalent project from within their organisation. This information was positioned as an anchor in the highest or lowest 20% of the distribution of costs as taken from BCIS. An answer was deemed correct if it was within +/- 20% of the mean value given by BCIS; an error was recorded if the response was nearer the anchor than the mean value; and a null was recorded if the answer was not correct but farther away from the anchor than the mean. The distribution is clearly centred around negative values. The mean is -1.0333 and the 95% confidence limits for the mean are both negative, at -1.3653 and -0.7013. The standard deviation is 0.9363. The distribution indicates that, as a group, the subjects make systematic errors of judgement by failing to make sufficient adjustment to the information given to them. This means that surveyors are likely to be influenced by the data given to them, and to work with that data rather than be truly objective. This makes them susceptible to the framing of questions and therefore likely to make errors.
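The correct/error/null classification just described can be sketched as follows (the function name and the BCIS mean and anchor figures are our own illustrative inventions; the paper does not say how a response exactly equidistant from anchor and mean was recorded, so that tie is treated here as a null):

```python
def classify(response: float, bcis_mean: float, anchor: float) -> str:
    """Classify an anchoring question response against the BCIS mean."""
    # Correct: within +/- 20% of the mean value given by BCIS.
    if abs(response - bcis_mean) <= 0.20 * bcis_mean:
        return "correct"
    # Error: the incorrect response lies nearer the anchor than the mean.
    if abs(response - anchor) < abs(response - bcis_mean):
        return "error"
    # Null: incorrect, but farther from the anchor than from the mean.
    return "null"

# Illustrative (invented) figures: BCIS mean 1000 per m2, high anchor 1500.
print(classify(1050, 1000, 1500))  # within 20% of the mean -> correct
print(classify(1400, 1000, 1500))  # pulled towards the anchor -> error
print(classify(700, 1000, 1500))   # wrong, but away from the anchor -> null
```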

Figure 2: Distribution of responses for conjunctive and disjunctive (n = 33; mean = -0.9788; standard deviation = 0.8738; 95% confidence interval for the mean = -1.2886 to -0.6689; minimum = -2.7000; 1st quartile = -1.7000; median = -1.0000; 3rd quartile = -0.1500; maximum = 1.3000; Anderson-Darling normality test A-squared = 0.5593, p = 0.1370)

For the conjunctive and disjunctive fallacy the questions are based on probabilities. Coloured bricks are drawn from a stack containing two types of coloured brick. In each case the question provides three different scenarios, ie the stack contains different proportions of the two coloured bricks and the requirement for success also changes. An example of this would be "...drawing a blue brick three times in a row from a stack containing 35% blue and 65% red bricks". The respondent is asked to place the three scenarios in order of likely success, ie which is most likely to succeed and so on. Placing the scenarios in the right order gives a correct response. Clearly, this question can be worked out using standard probability arithmetic, but the question tests whether the respondent, in failing to use such a rational approach, is swayed by one aspect of the data. In the three scenarios presented, the one with the least probability of success has the highest proportion of bricks in the stack of the same colour as the one being drawn. Therefore, if the subject opts not to perform the mental arithmetic, the question is whether they will choose on the basis of the proportions in the stack. To do so would be an error based on the conjunction of the probability and the proportion of coloured bricks. The results for this question are similar to those for insufficient anchor adjustment. The mean is negative at -0.9788, and the 95% confidence limits are -1.2886 and -0.6689. The standard deviation is 0.8738. Again, the sample shows systematic errors of judgement. Here the problem is that decisions are being made on the basis of an assumed association between the data and the right answer. Where that association does not hold true, errors are made.
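The probability arithmetic behind such a question is straightforward. A sketch with invented scenario figures (the paper quotes only the 35% blue example; independent draws, ie with replacement, are assumed for simplicity):

```python
# Probability of drawing the target colour n times in a row, assuming the
# draws are independent (with replacement) - a simplifying assumption.
def p_success(p_colour: float, n_draws: int) -> float:
    return p_colour ** n_draws

# Three invented scenarios of the kind described in the text.
scenarios = {
    "A: one blue from a stack 30% blue": p_success(0.30, 1),
    "B: two blues in a row from a stack 50% blue": p_success(0.50, 2),
    "C: four blues in a row from a stack 65% blue": p_success(0.65, 4),
}

# Ranked from most to least likely to succeed.  Scenario C has the highest
# proportion of the target colour in the stack yet the LOWEST probability of
# success - ranking by proportion alone is exactly the conjunctive error.
for name, p in sorted(scenarios.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {p:.4f}")
```

For the quoted example, the probability of three blues in a row from a 35% blue stack is 0.35^3, roughly 0.043, despite blue bricks making up over a third of the stack.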

Figure 3: Distribution of responses for overconfidence (n = 22; mean = -0.7409; standard deviation = 0.7189; 95% confidence interval for the mean = -1.0597 to -0.4222; minimum = -2.7000; 1st quartile = -1.0750; median = -0.7000; 3rd quartile = 0.0000; maximum = 0.0000; Anderson-Darling normality test A-squared = 0.8433, p = 0.0249)

The question on overconfidence requires the respondent to estimate five rates taken from a price book current in 1992. Once the rates have been estimated, the subject is asked to give an upper and a lower limit such that they would have 95% confidence that the correct answer lies between them. In each question there were five rates, and provided that four out of the five were correct this was taken as a correct response. Any other permutation was evidence of an error. The distribution for overconfidence is more negative than those of the previous two sub-types of error, in that there were no correct responses. The mean was -0.7409, with 95% confidence limits of -1.0597 and -0.4222. The standard deviation was 0.7189. There is clear evidence of systematic error. This is a potentially worrying finding, in that it indicates a lack of objectivity about potential errors in estimating, which could lead to misinformed decision-making.

CONCLUSIONS

The previous discussion identified two null-hypothesis scenarios: random distribution and positive skewing. The analysis of the results shows that neither of these holds true and, therefore, they can be disregarded. The results show clear evidence of systematic errors of an anchoring and adjustment type. These errors have implications for the quality of strategic cost advice given to clients and, therefore, for the quality of decisions made by clients when considering construction projects.
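The interval-based scoring used for the overconfidence questions can be sketched as follows (the function name and the rates and intervals are our own illustrative inventions; "correct" follows the four-out-of-five rule described earlier):

```python
def overconfidence_correct(true_rates, intervals, required_hits=4):
    """Return True if enough stated 95% intervals contain the true rates.

    true_rates: the five price-book rates; intervals: (lower, upper) limits
    the respondent stated with 95% confidence.  Following the test design,
    four out of the five intervals must contain the true rate.
    """
    hits = sum(low <= rate <= high
               for rate, (low, high) in zip(true_rates, intervals))
    return hits >= required_hits

# Illustrative (invented) rates and respondent intervals: only two of the
# five intervals contain the true rate, so the response counts as an error.
rates = [12.50, 8.75, 21.00, 5.40, 16.80]
stated = [(11.0, 13.0), (9.0, 10.0), (15.0, 18.0), (5.0, 6.0), (17.0, 19.0)]
print(overconfidence_correct(rates, stated))  # prints False
```

A well-calibrated respondent's 95% intervals would contain the true rate far more often than this; the narrow intervals typical of overconfident estimators are what drive the all-negative distribution in Figure 3.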
Errors of insufficient anchor adjustment will result in advice being skewed away from an objective assessment of cost and towards existing known data that is, or may be, irrelevant to the assessment. In the conjunctive and disjunctive error situation the consultant is demonstrating reliance upon a link between two events or facts that in reality does not hold. Introducing a logic flaw into the process of formulating advice

will lead to inaccurate calculations and advice that may be significantly in error. With the overconfidence error the practitioners are demonstrating a level of confidence in their estimates that is inconsistent with the facts. This error will lead to incorrect indications of the probability of the accuracy of advice. The following are criticisms of the research: the sample size is small; the sample was a convenience sample and is not representative of the population of experts; and the type of question used for overconfidence poses a problem, in that it would appear that a significant number of subjects had difficulty responding, or considered the question too taxing and therefore did not attempt it. This had been recognised in the previous study, and adjustments were made to reduce the difficulty attached to the question. Whilst the responses are better in the current study, there is still room for improvement. The recommendation for future research is that the new context based propensity for error test should be extended to include the other error types identified by Fortune and Lees (1997).

REFERENCES

Bazerman, M.H. (1993) Judgement in managerial decision making. 3rd ed. New York: Wiley.

Beach, L.R., Christensen-Szalanski, J. and Barnes, V. (1987) Assessing human judgement: has it been done, can it be done, should it be done? In: Wright, G and Ayton, P (Eds.), Judgemental Forecasting. Chichester: John Wiley & Sons, 49-62.

Birnie, J. (1995) The possible effects of human bias in construction cost prediction. In: Procs 11th Annual ARCOM Conference, University of York, Sept., 359-366.

Fortune, C.J. and Lees, M.A. (1996) The relative performance of new and traditional cost models in strategic advice for clients. RICS Research Paper Series, 2(2).

Fortune, C.J. and Lees, M.A. (1997) Cognitive errors of judgement, learning style characteristics and clients' early cost advisors. In: Procs 13th Annual ARCOM Conference, King's College Cambridge, Sept., 556-566.

Fortune, C.J. and Lees, M.A. (1998) Strategic cost modelling for construction projects: the role of judgement. Engineering, Construction and Architectural Management (in press).

Kahneman, D., Slovic, P. and Tversky, A. (1985) Judgement under uncertainty: heuristics and biases. London: Cambridge University Press.

Mak, S. and Raftery, J.J. (1992) Risk attitude and systematic bias in estimating and forecasting. Construction Management and Economics, 10, 303-320.

Raftery, J.J. (1995) Forecasting and decision making in the presence of risk and uncertainty. Keynote address to the CIB W55 International Symposium on Economic Evaluation and the Built Environment, Lisbon, 6, 1-11.

Tversky, A. and Kahneman, D. (1974) Judgement under uncertainty: heuristics and biases. Science, 185, 1124-1131.