Exploring Experiential Learning: Simulations and Experiential Exercises, Volume 5, 1978

THE USE OF PROGRAM BAYAUD IN THE TEACHING OF AUDIT SAMPLING

James W. Gentry, Kansas State University
Mary H. Bonczkowski, Kansas State University
Charles W. Caldwell, Kansas State University

INTRODUCTION TO THE PROBLEM

Statistical sampling is taught in Auditing classes at most academic institutions, but most CPAs with whom we have talked have little understanding of the sampling process. This paper discusses the use of Bayesian statistics in the determination of sample size, which is not in itself novel. However, the interactive program BAYAUD and the Big SAC Case, which was written to go with it, are somewhat novel.

Classical sampling techniques have dominated the content of most discussions of sampling in auditing. The classical approach relies on the standard error formula, which requires that the variance of the population (or the sample proportion) be estimated in order to compute the sample size. This requirement is a major criticism, for most students and businessmen consider it counterintuitive that one must assume what the results of the audit will be in order to design a sample that will provide those results. Another weakness of the classical approach is that it fails to consider explicitly the tradeoff between the accuracy of the added information and the cost of obtaining that information.

The Bayesian approach to sample size determination is at least as subjective as the classical approach, if not more so. However, it makes this subjectivity explicit and provides more basis for discussing how likely events are and what action to take after the information has been assembled. Proponents of the use of Bayesian statistics in auditing have been numerous. It has been argued that the auditor should formally combine his impression of a firm's internal control, gained from his past experience, with the sample information accumulated during the audit in order to form his best opinion of the current status of the firm. Further, several authors (Kraft, 1968; Smith, 1972; Sorensen, 1969; Tracy, 1969) have stated that the amount of sample evidence needed should depend on the auditor's past experience with the firm.

Much of the literature on the Bayesian approach has viewed the auditing problem as one with discrete states of nature, i.e., only a finite number of error rates is possible. For example, error rates of .001, .01, .05, and .10 may be hypothesized, and the auditor assesses the prior probabilities of these rates. Likelihood probabilities are calculated from the sample data under the assumption of Bernoulli sampling; in general, the likelihood is the probability of finding r errors in a sample of n transactions given the error rate p. The likelihoods and the priors are then combined via Bayes' theorem in its discrete form.
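As a minimal sketch of this discrete formulation (the prior probabilities and the sample outcome below are invented for illustration and are not part of the paper's example), the posterior probabilities of a handful of hypothesized error rates could be computed as follows:

```python
from math import comb

# Hypothesized error rates and (invented) prior probabilities assessed by the auditor
error_rates = [0.001, 0.01, 0.05, 0.10]
priors      = [0.10, 0.40, 0.40, 0.10]

# Sample outcome: r errors observed in n transactions (Bernoulli sampling)
n, r = 100, 3

# Binomial likelihood of observing r errors in n items at each hypothesized rate
likelihoods = [comb(n, r) * p**r * (1 - p)**(n - r) for p in error_rates]

# Bayes' theorem in its discrete form: posterior proportional to prior times likelihood
joint = [pr * lk for pr, lk in zip(priors, likelihoods)]
posteriors = [j / sum(joint) for j in joint]

for p, post in zip(error_rates, posteriors):
    print(f"P(error rate = {p:.3f} | sample) = {post:.3f}")
```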

One problem with this formulation is the great number of computations involved in determining the optimal sample size. Recent research has dealt with the development of efficient algorithms to search for the optimal sample size, and computer programs have been developed to handle the problem. Despite these advances, other problems exist with the discrete formulation.

A more realistic approach to the auditing problem is to let the estimated error rate take on an infinite number of values between the minimum and maximum possible (or reasonably expected) error rates. Francisco (1974) makes a strong case for the use of a continuous rather than a discrete approach; specifically, he proposed using the family of beta distributions as the conjugate prior distributions for the Bernoulli data-generating process. The concept of conjugate prior distributions greatly eases the computational burden involved in the continuous version of Bayes' theorem. The beta distribution is very similar to the binomial distribution, except that the uncertain quantity is p (the error rate) rather than r (the number of defective items). The form of the beta distribution with parameters r and n is

    f(p | r, n) = [Γ(n) / (Γ(r) Γ(n - r))] p^(r-1) (1 - p)^(n-r-1),    0 < p < 1,

with the requirement that n > r > 0. The mean and variance of a beta distribution with parameters r and n are

    E(p | r, n) = r / n                                    (1)

    Var(p | r, n) = r(n - r) / [n^2 (n + 1)]               (2)

If we can model our prior distribution by a beta distribution with parameters r' and n', and if the sample information is generated from a Bernoulli process with r defectives in n items, the posterior density will also be a member of the beta family, with parameters r'' and n''. Further, r'' will be equal to r' + r and n'' will be equal to n' + n. A proof of this can be seen in Winkler (1972, pp. 157-8). Note that single primes on the parameters r and n denote parameters of the prior distribution, while double primes denote parameters of the posterior distribution; when neither single nor double primes are present, we are referring to the sample information.

The shape of the prior beta distribution depends upon the choice of r' and n'. Given that the auditor provides an estimate of p, there are still an infinite number of values of r' and n' that will yield the same value of E(p | r', n').
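A small sketch of this conjugate machinery may help; the sample numbers are invented, and the two priors both have mean .05 but different weights n', anticipating the example in the next paragraph:

```python
def beta_mean_var(r, n):
    """Mean and variance of a beta distribution in the paper's (r, n) parameterization,
    equations (1) and (2): E(p) = r/n, Var(p) = r(n - r) / (n^2 (n + 1))."""
    return r / n, r * (n - r) / (n ** 2 * (n + 1))

# Two priors with the same mean error rate (.05) but different weights n'
for r_prime, n_prime in [(1, 20), (6, 120)]:
    mean, var = beta_mean_var(r_prime, n_prime)
    print(f"prior     r' = {r_prime}, n' = {n_prime}:  mean = {mean:.3f}, variance = {var:.4f}")

    # Invented sample information: r errors found in n transactions (Bernoulli process)
    r_sample, n_sample = 3, 100

    # Conjugate update: r'' = r' + r, n'' = n' + n
    r_post, n_post = r_prime + r_sample, n_prime + n_sample
    mean, var = beta_mean_var(r_post, n_post)
    print(f"posterior r'' = {r_post}, n'' = {n_post}:  mean = {mean:.3f}, variance = {var:.4f}")
```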

Suppose that the auditor believes that the actual error rate in a company is five percent. As can be seen by studying the expression for the variance given in equation (2), the variance decreases as n' increases. For example, the variance of the beta distribution with parameters r' = 1 and n' = 20 is .0023, while with parameters r' = 6 and n' = 120 it is .0004. The auditor can intuitively view n' as an actual sample of data of size n'; the more confidence he has in his prior judgment, the larger the n' he should select, resulting in a prior beta distribution with a small variance and increasing the relative weight assigned to his priors in relation to the sample information. The flexibility provided by the choice of r' and n' would alleviate conflicts such as those discussed by Ward (1975) and Corless (1975), in which Ward disagreed with Corless' (1972, p. 563) statement that "it seems very appropriate for the auditor to disregard the nonsampling evidence when it is contradicted by the more objective sampling evidence." Those who agree with Ward that both sample evidence and prior judgment are fallible could weight their prior distribution more heavily by choosing larger values for r' and n' (while keeping the ratio r'/n' constant) than would those who agree with Corless.

SIMPLIFYING THE USE OF THE CONTINUOUS MODEL

Few auditors have had any experience with the beta distribution and, consequently, training auditors to work with it is a large-scale undertaking. Steps to simplify the process need to be taken, and one such step is the interactive program by Blocher and Robertson [1]. Their program requires an estimate of p; a crude estimate of the variance of the prior distribution (low variance: n' = 150, r' = n' x p; moderate variance: n' = 100, r' = n' x p; high variance: n' = 50, r' = n' x p); the maximum acceptable error rate; and the sample data (r and n). The output from the program is the posterior probability that the error rate is less than or equal to the maximum acceptable error rate.

The primary advantage of the Blocher and Robertson program, in our opinion, is that it simplifies the assessment of the prior beta distribution. Clearly it allows the statistically unsophisticated user to understand the potential of the Bayesian approach in audit sampling. However, the simplification also serves to reduce the real-world applicability of the program. Only three degrees of certainty are possible: very sure, sure, and not so sure. These are translated into the following values for n': 150, 100, and 50. Restricting n' to one of these three values seems rather arbitrary and diminishes the flexibility to be derived from using the beta distribution. By allowing n' to take on only three possible values, the program limits the user's ability to vary the shape of the distribution to reflect his prior information.
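A rough sketch of this style of calculation follows. It is not the Blocher and Robertson program itself; the prior estimate, acceptable error rate, and sample outcome are invented. It simply shows how the three confidence levels translate into prior parameters and how the posterior probability of meeting the maximum acceptable error rate could be computed with a standard beta CDF:

```python
from scipy.stats import beta

def posterior_prob_acceptable(p_prior, confidence, max_rate, r_sample, n_sample):
    """Posterior P(error rate <= max_rate), with the prior assessed in the
    three-level style described above (very sure / sure / not so sure)."""
    n_prime = {"very sure": 150, "sure": 100, "not so sure": 50}[confidence]
    r_prime = n_prime * p_prior                       # r' = n' x p
    # Conjugate update, then convert (r'', n'') to the standard Beta(alpha, beta)
    r_post, n_post = r_prime + r_sample, n_prime + n_sample
    return beta.cdf(max_rate, r_post, n_post - r_post)

# Invented example: prior error-rate estimate of 2%, 4 errors found in a sample of 200,
# maximum acceptable error rate of 5%
for conf in ("very sure", "sure", "not so sure"):
    prob = posterior_prob_acceptable(0.02, conf, 0.05, r_sample=4, n_sample=200)
    print(f"{conf:12s}: P(error rate <= 5%) = {prob:.3f}")
```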

Another limitation of the program is that it does not provide feedback concerning the worth of the sample. One of the most important benefits of the Bayesian approach is that it allows the value of information to be contrasted with its cost. The emphasis on time budgets by CPA firms clearly indicates concern for the cost of sampling. Is the sample worth its cost? What sample size will provide the most gain (value minus cost)? The Bayesian concepts of Expected Net Gain from Sampling (ENGS), Expected Value of Sample Information (EVSI), and Cost of Sampling (CS) provide the auditor with the desired measures. However, unlike the computations involved in the revision of the prior beta distribution, the computation of EVSI in the continuous case is quite complex.

PROGRAM BAYAUD: AN EXTENSION OF THE BLOCHER AND ROBERTSON APPROACH

Our efforts have been directed toward extending the Blocher and Robertson [1] program to cope with the limitations discussed above. First, we have added an option that allows the auditor to input his own measure of the uncertainty in his original estimate of the error rate (n'). The auditor who feels comfortable with the beta distribution can thus choose any one of the infinitely many possible prior distributions. Second, we have added the capability of determining the Expected Value of Perfect Information (EVPI) and EVSI.

Calculation of these two amounts requires some additional inputs. The problem is structured so that the auditor has a choice between two actions: inspect 100% of the items under consideration, or accept the items as is. This formulation is similar to that illustrated by Sorensen (1969). The payoff or loss functions of both actions are assumed to be linear. The additional cost data needed include an estimate of the fixed cost to inspect plus the variable cost per item inspected, whether or not that item is in error. In addition, an estimate of the cost incurred per error discovered is needed; Sorensen (1969) refers to this as the cost of reworking errors, that is, the cost incurred by the auditor himself or provoked within the firm by the discovery of a defective item. The auditor also needs to estimate the cost associated with an undetected error, such as the economic consequences of possible client dissatisfaction or even a potential lawsuit.

The calculation of EVPI requires the use of cumulative beta probabilities, and the determination of EVSI requires several calculations of beta-binomial probabilities. These calculations are extremely time consuming to perform by hand, particularly if r and n are fairly large. The value of the information generated might not be worth the time involved, so it is best to let a computer make the necessary calculations. In this way it is quite simple to vary the inputs used to calculate EVPI and EVSI to see how sensitive they are to such changes. Having some idea of this sensitivity is important because of the highly imprecise nature of some of the inputs. Although no optimal sample size is determined here, the auditor can easily try different sample sizes, along with changes in the cost of inspection, to get some idea of what happens to EVSI.
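Because the continuous-case EVPI and EVSI calculations are the least transparent part of the approach, a rough sketch may help. The following is not program BAYAUD; it is a minimal preposterior analysis under the two-action, linear-loss structure described above, with all cost figures, prior parameters, and the cost-of-sampling schedule invented for illustration:

```python
from scipy.stats import beta, betabinom

# --- All numbers below are invented for illustration; they are not from the case. ---
N        = 5000      # total transactions in the population
fix_insp = 200.0     # fixed cost of a 100% inspection
var_insp = 1.0       # variable cost per item inspected
c_rework = 4.0       # cost per error discovered (the cost of reworking errors)
c_undet  = 40.0      # estimated long-run cost per undetected error

# Prior as a standard Beta(alpha, beta), with alpha = r' and beta = n' - r'; mean .03
a_prior, b_prior = 3.0, 97.0

def expected_cost(mean_p):
    """Expected cost of the better of the two actions for a distribution of p with
    the given mean (both loss functions are linear in p, as in the paper)."""
    accept  = c_undet * mean_p * N
    inspect = fix_insp + var_insp * N + c_rework * mean_p * N
    return min(accept, inspect)

def evpi(a, b):
    """Expected value of perfect information under a Beta(a, b) distribution,
    via the linear-loss integral (uses cumulative beta probabilities)."""
    p_b   = (fix_insp + var_insp * N) / ((c_undet - c_rework) * N)   # break-even error rate
    m     = a / (a + b)
    slope = (c_undet - c_rework) * N
    if m <= p_b:   # "accept" is the better act; loss is incurred when p turns out > p_b
        partial = m * (1 - beta.cdf(p_b, a + 1, b)) - p_b * (1 - beta.cdf(p_b, a, b))
    else:          # "inspect" is the better act; loss is incurred when p turns out < p_b
        partial = p_b * beta.cdf(p_b, a, b) - m * beta.cdf(p_b, a + 1, b)
    return slope * partial

def evsi(a, b, n):
    """Expected value of sample information for a proposed sample of size n,
    by preposterior analysis over the beta-binomial predictive distribution."""
    prior_cost = expected_cost(a / (a + b))
    post_cost  = 0.0
    for r in range(n + 1):
        prob = betabinom.pmf(r, n, a, b)                  # predictive P(r errors in n items)
        post_cost += prob * expected_cost((a + r) / (a + b + n))
    return prior_cost - post_cost

def cost_of_sample(n):
    """Hypothetical sampling-cost schedule (fixed plus variable cost per item sampled)."""
    return 50.0 + 0.8 * n

for n in (50, 100, 200):
    value = evsi(a_prior, b_prior, n)
    engs  = value - cost_of_sample(n)                     # ENGS = EVSI - CS
    print(f"n = {n:3d}: EVSI = {value:8.2f}, CS = {cost_of_sample(n):7.2f}, ENGS = {engs:8.2f}")
print(f"EVPI = {evpi(a_prior, b_prior):.2f}")
```

Looping over candidate sample sizes in this way mirrors the sensitivity analysis described above, since no optimal sample size is determined directly.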

This program and the Bayesian approach are also useful in that, if the probability of the error rate being less than the maximum acceptable error rate is not satisfactory to the auditor, he may decide to take another sample. If he does so, the new sample can easily be incorporated into the previous information by running the program a second time: the r'' and n'' values of the first run become the r' and n' of the second run, and the auditor can determine through EVSI and the new probability figure whether the second sample is worth the additional costs incurred.

CLASSROOM USE OF BAYAUD: THE BIG SAC CASE

The Big SAC Case was written as a vehicle for implementing BAYAUD in a case situation. It was designed to complement the Big MAC case, which is used to illustrate an example with discrete priors; that approach was discussed in Gentry, Burns, and Reutzel (1975). The introduction to the case briefly discusses the problems that one CPA firm is having in determining time budgets in general, and in determining the number of transactions to sample in particular. The specific details of the case deal with the auditing of a firm's accounts payable transactions. The firm has experienced some internal control problems in the past. Program BAYAUD is suggested as the appropriate vehicle for determining whether to sample and, if so, what sample size to use.

The inputs to program BAYAUD are listed below (a sketch of how such inputs might be assembled follows the list):

1. An estimate of the average error rate among the transactions, which depends on prior experience and the strength of the firm's internal control system.
2. A weight to be placed on the prior estimate of the error rate (n'). The value of n' may be input directly, or a less sophisticated approach may be used, choosing very sure (low variance, n' = 150), sure (moderate variance, n' = 100), or not so sure (high variance, n' = 50).
3. The maximum acceptable error rate, which is a function of the dollar magnitude of each error, the ultimate cost of each undetected error, and the potential for material misstatement of the account balances.
4. The total number of transactions that would be investigated in a 100% sampling plan (this number was provided).
5. The cost that would be charged for sampling 100% of the transactions.
6. The cost of correcting each error discovered.
7. The estimated long-run cost associated with each undetected error.
8. The proposed sample size, which should depend on the internal control system, the materiality of the transactions, prior experience, and the degree of certainty about the average error rate.
9. A possible sample outcome (the number of errors in this sample).
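The following is only an illustration of how the nine inputs above might be organized before being fed to such a program; the field names and example values are hypothetical and are not taken from the Big SAC Case:

```python
from dataclasses import dataclass

@dataclass
class BayaudInputs:
    prior_error_rate: float       # 1. estimated average error rate
    prior_weight_n: float         # 2. weight n' on the prior (150 / 100 / 50, or any value)
    max_acceptable_rate: float    # 3. maximum acceptable error rate
    population_size: int          # 4. transactions in a 100% sampling plan
    full_inspection_cost: float   # 5. cost of sampling 100% of the transactions
    cost_per_error_found: float   # 6. cost of correcting each error discovered
    cost_per_undetected: float    # 7. long-run cost of each undetected error
    proposed_sample_size: int     # 8. proposed sample size
    sample_errors: int            # 9. a possible sample outcome

# Hypothetical values, for illustration only
inputs = BayaudInputs(
    prior_error_rate=0.03,
    prior_weight_n=100,           # "sure" (moderate variance)
    max_acceptable_rate=0.05,
    population_size=5000,
    full_inspection_cost=5200.0,
    cost_per_error_found=4.0,
    cost_per_undetected=40.0,
    proposed_sample_size=200,
    sample_errors=5,
)
print(inputs)
```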

The outputs from program BAYAUD are:

1. The probability that the error rate is less than or equal to the maximum acceptable error rate. This probability is calculated by combining the prior estimate of the error rate with the sample results.
2. The expected value of perfect information (EVPI).
3. The expected value of sample information (EVSI) for the supplied sample size.
4. The cost of the sample (CS).
5. The expected net gain from sampling (ENGS) for this sample size.

The students are to develop a sampling plan for the accounts payable part of the audit. Also, since this is just one small segment of the auditor's work, they are to generalize to a wide variety of audits by listing general guidelines on how to determine the inputs to program BAYAUD. A key to the success of these efforts is the justification of the inputs. The students are encouraged to interact with faculty, students with auditing experience, or practicing accountants in order to obtain some support for the cost values. They are also to analyze how sensitive the outputs are to changes in the inputs.

CLASSROOM EXPERIENCE

The case has been used in a graduate Decision Theory class composed of a mix of accounting and non-accounting majors. Only the accounting students were required to write a report on the case. Most students found that the case simplified the understanding of the Bayesian approach using a continuous prior distribution. The discussion of the approach in the text (Jones, 1977, pp. 218-233) is quite complicated; the case certainly helped to make it understandable.

One obvious change is that the task should have been to design a sampling procedure for accounts receivable rather than accounts payable. Realistically, there is much less tendency to pad accounts payable than accounts receivable, so serious errors in the auditing of accounts payable may involve failing to discover a missing transaction rather than finding an erroneous one.

Probably the most difficult part of the case for the students was providing the justification of the inputs. The task was ambiguous for all of the accounting students, but especially so for those with no experience as a CPA. We argue, however, that this is probably one of the most valuable aspects of the case, as the students were forced to look at the audit in terms of its time costs to the firm being audited and in terms of its potential costs to the CPA firm if the audit is not satisfactory. Consequently, the case not only helps students understand the Bayesian approach to audit sample size determination, but also forces them to look at the auditing process from a somewhat different viewpoint.

REFERENCES

1. Blocher, Edward and Robertson, Jack C., "Bayesian Sampling Procedures for Auditors: Computer-Assisted Instruction," The Accounting Review (April 1976), pp. 359-363.
2. Corless, John C., "Assessing Prior Distributions for Applying Bayesian Statistics in Auditing," The Accounting Review (July 1972), pp. 556-566.
3. Corless, John C., "Comment on Assessing Prior Distributions for Applying Bayesian Statistics in Auditing: A Reply," The Accounting Review (January 1975), pp. 158-159.
4. Francisco, Albert K., "The Application of Bayesian Statistics to Auditing: Discrete Versus Continuous Prior Distributions," Working Paper 74-1, Center for Business Research and Services, College of Business, Idaho State University (December 1974).
5. Gentry, James W., Burns, Alvin C., and Reutzel, Edward T., "The Use of Program BAYES in the Teaching of Sample Size Determination in Survey Research," in Richard H. Buskirk (ed.), Simulation Games and Experiential Learning in Action, Proceedings of the 2nd ABSEL Conference, Bloomington, Indiana (April 1975), pp. 73-80.
6. Jones, J. Morgan, Introduction to Decision Theory, Homewood, Illinois: Richard D. Irwin, Inc., 1977.
7. Kraft, William H., Jr., "Statistical Sampling for Auditors: A New Look," The Journal of Accountancy (August 1968), pp. 49-56.
8. Smith, Kenneth A., "The Relationship of Internal Control Evaluation and Audit Sample Size," The Accounting Review (April 1972), pp. 201-209.
9. Sorensen, James E., "Bayesian Analysis in Auditing," The Accounting Review (July 1969), pp. 555-556.
10. Tracy, John A., "Bayesian Statistical Methods in Auditing," The Accounting Review (January 1969), pp. 90-98.
11. Ward, Bart J., "Assessing Prior Distributions for Applying Bayesian Statistics in Auditing: A Comment," The Accounting Review (January 1975), pp. 155-157.
12. Winkler, Robert L., Introduction to Bayesian Inference and Decision, New York: Holt, Rinehart and Winston, Inc., 1972.