Volume 9, Number 3


DOI: 10.5191/jiaee.2002.09307

Handling of Nonresponse Error in the Journal of International Agricultural and Extension Education

James R. Lindner, Assistant Professor, Texas A&M University

Abstract
This study was designed to describe and explore how nonresponse in the Journal of International Agricultural and Extension Education historically has been handled. Similar work by Lindner, Murphy, and Briers (2001) was replicated in an effort to describe the generalizability of their findings and the applicability of their recommendations to this population. All articles (N = 87) published in the Journal of International Agricultural and Extension Education during the years 1995 through 1999 were analyzed using content analysis techniques. The findings of this study support those of Lindner, Murphy, and Briers (2001): not mentioning nonresponse error as a threat to the external validity of a study, not attempting to control for nonresponse error, and not providing a reference to the literature were unfortunately the norm rather than the exception. Recommendations for handling nonresponse error and minimally acceptable response rates are provided.

Introduction
In 1983, Miller and Smith wrote an article on handling nonresponse issues. It is one of the most widely cited articles in the agricultural and extension education profession, providing researchers with a professionally acceptable and statistically sound method for handling nonresponse as a threat to the external validity of studies that employ sampling techniques. Such efforts to improve our research methods are necessary to ensure the objectivity and rigor of research. Subsequently, Miller (1998a) noted that "numerous improvements can be made in our research" (p. 10), and suggested that the profession continue to devote time to renewing, maintaining, and improving our ability to use appropriate research methods and techniques.
Improving research in agricultural and extension education requires that we periodically examine our methods and techniques. Miller (1998b, p. 52) also stated that "some basic researchers would note that our research is soft, does not have clearly defined objectives or hypotheses, lacks focus and rigor, is not programmatic, and is not sufficiently funded." Such an indictment of our profession is a clarion call to improve, communicate, justify, and defend our research methodologies. As the Association for International Agricultural and Extension Education (AIAEE) comes of age, Miller and Sandman (2000, p. 39) raised questions with respect to AIAEE scholarship: "How do we assure scholarly standards?" and "How can we assure that new entrants to the field are professionally socialized to contribute to scholarship in AIAEE as well as practices?"

Social science research has advanced, in part, due to the efforts of research designers and statisticians to produce reliable and valid techniques for the measurement of social variables (Ary, Jacobs, & Razavieh, 1996). Measures of characteristics assessed using these techniques, including probabilistic sampling techniques, can be used to estimate parameters of a population. The ability of social science researchers to draw conclusions, generalize results, and make inferences to broader audiences is enhanced by the use of these techniques (Gall, Borg, & Gall, 1996).

According to Dillman (2000), there are four possible sources of error in sample survey research: sampling error, coverage error, measurement error, and nonresponse error. As any one of these types of error increases in a survey research study, the results and recommendations of that study become increasingly suspect and decreasingly valuable as evidence of the characteristic in other audiences. Sampling error is a result of measuring a characteristic in some, but not all, of the units or people in the population of interest.
Sampling error always exists at some level when a random sample is drawn. It is reduced through larger samples but cannot be completely eliminated. Sampling error is unknown when any of the methods for random selection and assignment of subjects to treatments are violated.

Coverage error occurs when the list or frame from which the sample is drawn fails to contain all the people in the population of interest. For example, using the dues-paying members of the AIAEE to sample the population of higher education faculty working in international agricultural and extension education would introduce coverage error.

Measurement error is contained in the instrument used to collect the data. Reducing this source of error requires that the researcher use consistent questions and answers that are both clear and unambiguous to the readers.

Nonresponse error exists to the extent that people included in the sample fail to provide usable responses and differ, on the characteristics of interest in the study, from those who do respond. Of these four types of error, nonresponse error has perhaps received the least attention.

A recent study by Lindner, Murphy, and Briers (2001) found that not mentioning nonresponse error as a threat to the external validity of a study, not attempting to control for nonresponse error, or not providing a reference to the literature were unfortunately the norm and not the exception in articles published during the 1990s in the Journal of Agricultural Education. They concluded that the procedures used to address nonresponse error provided strong evidence that early/late comparison and follow-up with nonrespondents were valid, reliable, and generally well accepted procedures for handling nonresponse error as a potential threat to the external validity of research findings. These authors recommended replication of their study with articles published in other scholarly publications and professions to determine whether their findings were generalizable and their recommendations applicable to other populations. An additional item of concern not addressed by Lindner, Murphy, and Briers (2001) was the minimal response rate needed to ensure that a representative sample was surveyed.
To further reduce the threat of nonresponse error, it is recommended that a minimum response rate of 50% be achieved (L. E. Miller, personal communication, December 12, 2001; Fowler, 2001; Babbie, 1990). The ability to control for potential nonresponse error is diminished when a response rate of 50% is not achieved. Aiken (1981) provides a formula for calculating the "minimum required proportion of returns" (p. 1036) needed to ensure that the responses of respondents are representative of the sample.

Review of Procedures for Handling Nonresponse Error
In their article on handling nonresponse in survey research, Miller and Smith (1983) stated that Extension evaluators could use one of five general methods for controlling nonresponse error once appropriate follow-up procedures have been carried out: ignore nonrespondents; compare respondents to the population; compare respondents to nonrespondents; compare early to late respondents; and double-dip nonrespondents. The authors further state that nonresponse error is a concern for response rates as high as 90%.

Gall, Borg, and Gall (1996) suggested that if appropriate follow-up procedures have been carried out and a response rate of less than 80% was achieved, then a random sample of 20 nonrespondents should be contacted ("double-dipped"). Responses should then be compared on each item of the instrument to determine whether nonresponse error is a problem. Ary, Jacobs, and Razavieh (1996) noted that if, after appropriate follow-up procedures have been carried out, a response rate of less than 75% was achieved, the researcher should attempt to describe how respondents might differ from nonrespondents by comparing characteristics of respondents to those of the population, comparing early to late respondents, or comparing respondents to a small random sample of nonrespondents.
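Purely as an illustration (this helper function and its name are not from the article or the cited sources), the rules of thumb reviewed above can be collected into a single Python sketch; the thresholds are those attributed to the cited authors, and the returned strings are paraphrases:

```python
def nonresponse_action(response_rate):
    """Rough sketch mapping an achieved response rate (0.0-1.0) to the
    follow-up advice reviewed above; thresholds per the cited sources."""
    if response_rate >= 1.0:
        # Full response: nonresponse error cannot threaten external validity.
        return "no threat: 100% response achieved"
    if response_rate < 0.5:
        # Below the recommended 50% minimum (Miller; Fowler; Babbie), the
        # ability to control for potential nonresponse error is diminished.
        return "below 50% minimum: control for nonresponse is diminished"
    if response_rate < 0.75:
        # Ary, Jacobs, and Razavieh (1996): describe how respondents may
        # differ from nonrespondents (population, early/late, or sample).
        return "describe possible respondent/nonrespondent differences"
    if response_rate < 0.80:
        # Gall, Borg, and Gall (1996): contact ("double-dip") a random
        # sample of 20 nonrespondents and compare item by item.
        return "double-dip ~20 nonrespondents and compare by item"
    # Miller and Smith (1983): nonresponse error remains a concern for
    # response rates as high as 90%.
    return "check anyway: nonresponse a concern up to ~90%"
```

The cut points are a simplification; the sources frame them as conditions that hold only after appropriate follow-up procedures have been carried out.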
Similarly, Tuckman (1999) recommended that if fewer than about 80% of the people who receive the questionnaire complete and return it, "the researcher must try to reach a portion of the nonrespondents and obtain some data from them. Additional returns of all or critical portions of the questionnaire by 5 to 10% of the nonrespondents is required for this purpose" (p. 267).

Purpose
The purpose of this line of inquiry was to describe and explore how nonresponse in the Journal of International Agricultural and Extension Education was handled for the years 1995 through 1999. Specific objectives included:
1. Describe the number and type of articles published in the Journal.
2. Describe the sampling procedures used to select research participants in articles published by the Journal.

3. Describe the response rate reported in research articles published in the Journal.
4. Describe how often nonresponse error as a potential threat to external validity was mentioned in articles published in the Journal.
5. Describe how nonresponse error was controlled for in articles published in the Journal.
6. Describe the literature cited in handling nonresponse error for articles published in the Journal.
7. Describe results from attempts to control for nonresponse error in articles published in the Journal.

Methods
All articles (N = 87) published in the Journal of International Agricultural and Extension Education during the years 1995 through 1999 were analyzed using content analysis techniques. Data were analyzed using SPSS, and appropriate descriptive statistics were used. The instrument was developed by Lindner, Murphy, and Briers (2001). Seven coding categories were used to gather data. Type of article was coded as sampling procedures used or sampling procedures not used. Response rate was coded as the actual rate achieved. Mention of nonresponse error as a possible threat to external validity was coded as mentioned nonresponse, did not mention nonresponse, or 100% response rate achieved. How nonresponse error was handled was coded into the categories proposed by Miller and Smith (1983). Literature cited was coded by actual reference to the literature. Results of efforts to control for nonresponse error were coded as no differences found, differences found, or did not indicate results. Sampling procedures used were coded as one of nine categories.

A panel of researchers at Texas A&M University and Texas Tech University established content validity. Each article was independently read and analyzed by two researchers. Researcher analysis of the data was entered onto the data collection instrument. To establish inter-rater reliability, results between researchers were compared to determine discrepancies.
Less than one discrepancy per coding category existed. When discrepancies existed, the two researchers, working together, reanalyzed the data and reached agreement.

Findings
Objective One
The first objective was to describe the number and type of articles published. Eighty-seven articles were published in the Journal for the years 1995 through 1999. Approximately 60% (n = 52) of the articles published in the Journal used sampling procedures.

Objective Two
As shown in Table 1, the second objective was to describe the sampling procedures used to select research participants in articles published in the Journal during the years 1995 through 1999. The sampling procedures used most were census (19.2%), stratified sampling (17.3%), and purposive sampling (17.3%). The sampling procedures used least were cluster sampling (9.6%), Delphi sampling (5.8%), and systematic sampling (3.8%). Two articles did not report sampling procedures.

Table 1
Sampling Procedures Used in Articles Published in the Journal of International Agricultural and Extension Education, 1995-1999

Sampling Procedure          f      %
Census                     10   19.2
Stratified Sampling         9   17.3
Purposive Sampling          9   17.3
Simple Random Sampling      6   11.5
Convenience Sampling        6   11.5
Cluster Sampling            5    9.6
Delphi Sampling             3    5.8
Systematic Sampling         2    3.8
Not Reported                2    3.8
Total                      52  100.0

Objective Three
The third objective was to describe the response rate reported in research articles published in the Journal during the years 1995 through 1999. Table 2 shows the response rates of the studies published. The mean response rate was 86.8% (SD = 20.3). The minimum response rate reported was 10% and the maximum was 100%. Approximately 40% of the studies reported that a 100% response rate was achieved. Thirteen percent of the studies reported a response rate of 90-99% (f = 7). Almost 20% of the studies did not report a response rate.

Table 2
Response Rates of Research Articles Published in the Journal of International Agricultural and Extension Education, 1995-1999

Response Rate              f      %
100%                      20   38.5
90-99%                     7   13.4
80-89%                     5    9.5
70-79%                     4    7.6
60-69%                     1    1.9
50-59%                     2    3.8
Less than 50%              3    5.7
Did not report             10  19.2
Total                     52  100.0

Note. M = 86.8%; SD = 20.3%; Min = 10%; Max = 100%.

Objective Four
The fourth objective was to describe how often nonresponse error was mentioned as a possible threat to the external validity of the study. Table 3 shows that approximately 30% of the articles published in the Journal mentioned nonresponse error as a potential threat to external validity. For almost 40% of the articles, nonresponse error was not a threat to external validity because a 100% response rate was achieved. Approximately 33% of the articles did not mention nonresponse error as a potential threat to external validity. Of the 52 research articles published in the Journal, nonresponse was a threat to the external validity of the findings in approximately 62% of the studies.

Table 3
Frequency with Which Nonresponse Error as a Potential Threat to External Validity Was Mentioned in Articles Published in the Journal of International Agricultural and Extension Education, 1995-1999

Factor                                               f      %      f      %
Less than 100% response rate achieved               32   61.5
  Mentioned nonresponse                             15   28.8     15   46.9
  Did not mention nonresponse                       17   32.7     17   53.1
  Nonresponse a threat to external validity         32   61.5     32  100.0
100% response rate achieved                         20   38.5
  Mention of nonresponse not necessary              20   38.5     20  100.0
  Nonresponse not a threat to external validity     20   38.5     20  100.0
Grand Total                                         52  100.0

Objective Five
The fifth objective was to describe how nonresponse error was controlled for in the articles in which nonresponse was a potential threat to external validity (f = 32). No attempts were made to control for nonresponse error in 21 of the 32 articles (65.6%).
In four of these articles, nonresponse as a potential threat to external validity was nonetheless mentioned. In the remaining 11 articles, nonresponse error was controlled for by comparing early to late respondents (8 studies) or by following up with nonrespondents (3 studies).

Objective Six
The sixth objective was to describe the literature cited in handling nonresponse error for articles published in the Journal during the years 1995 through 1999. Of the studies in which nonresponse error was a potential threat to external validity, approximately 80% (f = 26) did not provide a reference to the literature for how nonresponse was or should be handled. Six articles (20%) cited Miller and Smith (1983) for how nonresponse was handled.

Objective Seven
The seventh objective was to describe the results of attempts to control for nonresponse error in articles published in the Journal during the years 1995 through 1999. No differences were found between respondents and nonrespondents in eight of the fifteen articles that mentioned nonresponse error as a threat to external validity. No differences were found between early and late respondents in three of the fifteen articles. No data were reported in four of the fifteen articles.

Conclusions
To ensure the external validity of a study, or the generalizability of research findings to the target population, the researcher must satisfactorily answer the question of whether the results of the survey would have been the same if a 100% response rate had been achieved (Richardson, 2000). Controlling for nonresponse error begins with designing and implementing research following generally acceptable protocols and procedures (Dillman, 2000). Appropriate sampling protocols and procedures should be used to maximize participation. Once participation has been maximized, the researcher will have either a response rate high enough to conclude that nonresponse is not a threat to external validity, or a response rate that warrants additional procedures for ensuring that nonresponse is not a threat to external validity.

Seven different general sampling procedures were used to collect data for the 52 articles published in the Journal from 1995 to 1999. Nonresponse error can be a threat to the external validity of a study when any of these sampling procedures is used and less than a 100% response rate is achieved. A 100% response rate was achieved in 20 of the articles published in the Journal. Nonresponse, therefore, was a potential threat to external validity in 32 articles. In approximately 53% of these 32 articles, nonresponse error as a potential threat to external validity was not mentioned. In almost 65% of these 32 articles, no attempts to control for nonresponse were mentioned. The external validity of those findings is, therefore, unknown. These findings are consistent with those of Lindner, Murphy, and Briers (2001).

Of the articles attempting to control for nonresponse, the error was handled primarily by comparing early to late respondents or comparing respondents with a sample of nonrespondents. A total of 7 references to the literature were made.
During the five years of research covered in this paper, no differences were found between early and late respondents or between respondents and nonrespondents: early respondents were similar to late respondents, and respondents were similar to nonrespondents. As noted throughout this paper, not mentioning nonresponse error as a threat to the external validity of a study, not attempting to control for nonresponse error, or not providing a reference to the literature were unfortunately the norm and not the exception. To ensure the external validity of research findings, statistically sound and professionally acceptable procedures and protocols for handling nonresponse error are needed and should be reported.

The results presented in this paper show how nonresponse has been handled in the past. Given these limited results, the findings, and the literature, it is recommended that greater accountability and stricter procedures for handling nonresponse be adopted in the future. It is recommended that a follow-up study of the handling of nonresponse in the Journal of International Agricultural and Extension Education be conducted in five years to describe the reliability and validity of the proposed procedures. It is also recommended that this study be replicated with articles published in other scholarly publications and other professions to describe the generalizability of these findings to other populations and the applicability of the recommendations.

Recommendations for Handling Nonresponse
Based on the findings of this study and the review of literature, it is recommended that the methods and procedures for handling nonresponse issues proposed by Lindner, Murphy, and Briers (2001) be implemented when a response rate of less than 85% but more than 50% is achieved. Variations from this recommendation should be justified by calculating and reporting, for example, the "minimum required proportion of returns" (Aiken, 1981, p. 1036).
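The three comparisons recommended by Lindner, Murphy, and Briers (2001) and elaborated below can be sketched minimally in Python. This is an illustration, not the authors' code: the function names, the hand-computed Welch t statistic, and the least-squares slope are assumptions, and in practice one would use a statistics package and a proper significance test rather than eyeballing the t statistic or slope.

```python
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples (no p-value;
    compare |t| against a critical value from a t table)."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

# Method 1: compare early to late respondents, where "late" means the
# last follow-up wave (or the later 50% if waves cannot be defined).
def early_vs_late(values, waves):
    last = max(waves)
    early = [v for v, w in zip(values, waves) if w < last]
    late = [v for v, w in zip(values, waves) if w == last]
    return welch_t(early, late)

# Method 2: regress the primary variable of interest on days to
# respond; a statistically nonsignificant slope suggests respondents
# do not differ from nonrespondents.
def days_to_respond_slope(values, days):
    n = len(values)
    mx, my = sum(days) / n, sum(values) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(days, values))
    sxx = sum((x - mx) ** 2 for x in days)
    return sxy / sxx

# Method 3: compare respondents with a followed-up random sample of
# nonrespondents (at least 20 such responses are recommended).
def respondents_vs_nonrespondents(resp, nonresp_sample):
    return welch_t(resp, nonresp_sample)
```

For example, `early_vs_late([4, 5, 4, 5], [1, 1, 2, 2])` compares wave-1 scores with wave-2 scores on one variable of interest; only if no differences are found would results be generalized to the target population.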
The three methods for handling nonresponse error proposed by Lindner, Murphy, and Briers are comparing early to late respondents, using days to respond as a regression variable, and comparing respondents to nonrespondents.

Method 1: Comparison of Early to Late Respondents. One technique for operationally defining late respondents is based on the responses generated by successive waves of a questionnaire. We therefore recommend that late respondents be defined operationally as those who respond in the last wave of respondents in successive follow-ups to a questionnaire. If the last stimulus does not generate 30 or more responses, the researcher should back up and use responses to the last two stimuli as his or her late respondents. Comparison would then be made between early and late respondents on the primary variables of interest. Only if no differences are found should results be generalized to the target population. If respondents cannot be categorized by successive waves, or if a wave of 30 respondents cannot be defined by successive stimuli, then we recommend that late respondents be defined operationally and arbitrarily as the later 50% of the respondents.

Method 2: Using Days to Respond as a Regression Variable. Days to respond is coded as a continuous variable and used as an independent variable in regression equations in which the primary variables of interest are regressed on days to respond. If the regression model does not yield statistically significant results, it can be assumed that nonrespondents do not differ from respondents.

Method 3: Comparison of Respondents to Nonrespondents. Differences between respondents and nonrespondents should be examined by sampling nonrespondents, working extra diligently to get their responses, and then comparing their responses to those of the previous respondents. A minimum of 20 responses from a random sample of nonrespondents should be obtained. If fewer than 20 nonrespondent responses are obtained, they could be combined with the other responses and used in conjunction with Method 1 or 2. (pp. 51-52)

By employing these methods, and then measuring their effectiveness, the profession will verify or refute their utility in reducing nonresponse error. If these methods for addressing nonresponse error as a threat to the external validity of a study are effective, we will continue to use them; if ineffective, we will have evidence of that and a deeper understanding of the problem.

References
Aiken, L. R. (1981). Proportion of returns in survey research. Educational & Psychological Measurement, 41(4), 1033-1038.
Ary, D., Jacobs, L., & Razavieh, A. (1996). Introduction to research in education (5th ed.). Ft. Worth: Holt, Rinehart, and Winston.
Babbie, E. R. (1990).
Survey research methods (2nd ed.). Belmont, CA: Wadsworth.
Dillman, D. A. (2000). Mail and internet surveys: The tailored design method. New York: Wiley.
Fowler, F. J., Jr. (2001). Survey research methods (3rd ed.). Thousand Oaks, CA: Sage.
Gall, M. D., Borg, W. R., & Gall, J. P. (1996). Educational research: An introduction (6th ed.). White Plains, NY: Longman.
Kerlinger, F. N. (1986). Foundations of behavioral research (3rd ed.). New York: Holt, Rinehart, and Winston.
Lindner, J. R., Murphy, T. H., & Briers, G. H. (2001). Handling nonresponse in social science research. Journal of Agricultural Education, 42(4), 43-53.
Miller, L. E. (1998a). Appropriate analysis. Journal of Agricultural Education, 39(2), 1-10.
Miller, L. E. (1998b). Research-to-practice in a positivistic community. Journal of International Agricultural and Extension Education, 5(2), 51-57.
Miller, L. E., & Sandman, L. (2000). A coming of age: Revisiting AIAEE scholarship. Journal of International Agricultural and Extension Education, 7(2), 38-44.
Miller, L. E., & Smith, K. L. (1983). Handling nonresponse issues. Journal of Extension, 21(5), 45-50.
Richardson, A. J. (2000). Behavioural mechanisms of non-response in mailback travel surveys. Paper presented at the 79th Annual Meeting of the Transportation Research Board, Washington, DC.
Tuckman, B. W. (1999). Conducting educational research (5th ed.). Fort Worth: Harcourt Brace.