Threats to Validity in Evaluating CME Activities
1 Threats to Validity in Evaluating CME Activities Jason King Baylor College of Medicine Citation: King, J. E. (2008, January). Threats to Validity in Evaluating CME Activities. Presented at the annual meeting of the Alliance for Continuing Medical Education, Orlando, FL. Please do not disseminate or adapt without express permission of the author. Thank you. Introduction CME activity evaluation can be conceptualized as a research study because it includes: Design (how the study is conducted) Instrument(s) Analysis of data to draw inferences about the effect of an intervention or treatment (ie, the educational activity) Each component can be affected by bias GOAL: Minimize Bias Increase Validity Ability to Draw More Plausible Conclusions 1
2 Introduction Objectives for Participants: 1. Be aware of the importance of collecting valid data 2. Begin to think about ways to minimize the effects of bias 3. Keep things simple Much can be learned by applying a simple, yet effective research design...and fully describing the data using simple summary statistics! Introduction There are potential biases associated with each component of a study: 1. Study Design Issues related to Internal Validity Issues related to External Validity 2. Instrument Design Issues related to Construct Validity 3. Data Analysis Issues related to Statistical Conclusion Validity 2
3 Study Design Internal Validity Issues Internal Validity: To what degree is the study designed such that we can infer that the treatment caused the measured effect? EXAMPLE: Did participation in a CME program change physician prescribing practices? An internally valid study will minimize the influence of extraneous variables Study Design Internal Validity Issues Types of Designs: 1. One Group Posttest Design X E X = Implementation of the treatment E = Measurement of subjects in experimental group 3
4 Study Design Internal Validity Issues Types of Designs: 1. One Group Posttest Design X E NOT a true experiment Perhaps most common design in CME, yet this design is most likely to produce biased results: Why? Study Design Internal Validity Issues Types of Designs: 2. One Group Pretest/Posttest Design E 1 X E 2 X = Implementation of the treatment E = Measurement of subjects in experimental group 4
5 Study Design Internal Validity Issues Types of Designs: 2. One Group Pretest/Posttest Design E 1 X E 2 NOT a true experiment Potentially offers less biased results because each subject serves as their own control Study Design Internal Validity Issues Types of Designs: 3. Comparison Group Posttest Design X E C X = Implementation of the treatment E = Measurement of subjects in experimental group C = Measurement of subjects in comparison group 5
6 Study Design Internal Validity Issues Types of Designs: 3. Comparison Group Posttest Design X E C NOT a true experiment Comparison group potentially controls some threats to validity, but is not randomly assigned Study Design Internal Validity Issues Types of Designs: 4. Comparison Group Pretest/Posttest Design E 1 X E 2 C 1 C 2 X = Implementation of the treatment E = Measurement of subjects in experimental group C = Measurement of subjects in comparison group 6
7 Study Design Internal Validity Issues Types of Designs: 4. Comparison Group Pretest/Posttest Design E 1 X E 2 C 1 C 2 NOT a true experiment Comparison group potentially controls some threats to validity Pre existing differences can be measured Study Design Internal Validity Issues Types of Designs: 5. Control Group Posttest Design X E 1 R C 1 X = Implementation of the treatment E = Measurement of subjects in experimental group C = Measurement of subjects in control group R = Randomization to treatment groups 7
8 Study Design Internal Validity Issues Types of Designs: 5. Control Group Posttest Design X E 1 R C 1 TRUE experiment because of randomization Pre existing differences can be minimized through randomization Study Design Internal Validity Issues Types of Designs: 6. Control Group Pretest/Posttest Design E 1 X E 2 R C 1 C 2 X = Implementation of the treatment E = Measurement of subjects in experimental group C = Measurement of subjects in control group R = Randomization to treatment groups 8
9 Study Design Internal Validity Issues Types of Designs: 6. Control Group Pretest/Posttest Design E 1 X E 2 R C 1 C 2 TRUE experiment because of randomization Offers strong control and measurement of preexisting differences Study Design Internal Validity Issues Types of Designs: 6. Control Group Pretest/Posttest Design EXAMPLE: Comparison of online CME vs. traditional, live CME Internet novices were expected to gravitate away from online CME, so subjects were randomly assigned 9
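The randomization step (the "R" in Designs 5 and 6) can be sketched in Python. This is an illustrative sketch, not part of the original study; the roster of participant IDs is hypothetical.

```python
import random

def randomize(participants, seed=None):
    """Randomly assign participants to treatment and control groups
    (the R step in Designs 5 and 6)."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

# Hypothetical roster of CME registrants (IDs are illustrative)
roster = [f"MD{i:03d}" for i in range(1, 41)]
treatment, control = randomize(roster, seed=42)
print(len(treatment), len(control))  # 20 20
```

Shuffling the full pool before splitting it guarantees every participant has an equal chance of landing in either group, which is what minimizes pre-existing differences.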
10 Study Design Internal Validity Issues Study Design Internal Validity Issues 1. HISTORY: Events that occur in the subjects' environment that may affect the outcome Scenario A CME program is held that targets specific changes in physician prescribing practices 10
11 Study Design Internal Validity Issues 1. HISTORY: Events that occur in the subjects' environment that may affect the outcome Potential Bias During the study period, a series of meetings is offered nationally (external to the study) to increase the same prescribing practices, which artificially increases the treatment effect Study Design Internal Validity Issues 1. HISTORY: Events that occur in the subjects' environment that may affect the outcome Minimize Bias Include a control group (eg, individuals who did not participate in the CME program) Report results (eg, means) disaggregated by the variables that may be confounding the relationship (eg, attendance at one of the meetings associated with the initiative) 11
12 Study Design Internal Validity Issues 1. HISTORY: Events that occur in the subjects' environment that may affect the outcome When possible, always plan to include measures of any variables that may potentially cause bias! Study Design Internal Validity Issues 2. TESTING EFFECTS: Changes in what is being measured brought about by the reaction to the process of measurement Scenario A test is administered before and after an online CME activity to measure change in knowledge levels 12
13 Study Design Internal Validity Issues 2. TESTING EFFECTS: Changes in what is being measured brought about by the reaction to the process of measurement Potential Bias Using the same items on both instruments cues subjects into the topics that will be assessed on the posttest, resulting in an artificially increased treatment effect Can affect attitudinal measures as well Study Design Internal Validity Issues 2. TESTING EFFECTS: Changes in what is being measured brought about by the reaction to the process of measurement Minimize Bias Include a comparison group not receiving the pretest Create a parallel posttest However, a psychometrician may be needed (ie, to ensure that items are of similar difficulty and discrimination) 13
14 Study Design Internal Validity Issues 3. INSTRUMENTATION: Changes in the attributes of the measuring instrument or procedure take place during the study Scenario Physicians are asked to rate their office personnel before and after a systems-based educational intervention aimed at improving communication skills Study Design Internal Validity Issues 3. INSTRUMENTATION: Changes in the attributes of the measuring instrument or procedure take place during the study Potential Bias Physicians may be disposed to give more favorable ratings the second time because they expect (consciously or subconsciously) a change to have occurred 14
15 Study Design Internal Validity Issues 3. INSTRUMENTATION: Changes in the attributes of the measuring instrument or procedure take place during the study Minimize Bias Have an external observer rate the office personnel, perhaps without knowledge of when the intervention takes place (blinding) Study Design Internal Validity Issues 3. INSTRUMENTATION: Changes in the attributes of the measuring instrument or procedure take place during the study Scenario #2 A self report instrument is administered before and after the CME activity to determine changes in self assessed confidence levels 15
16 Study Design Internal Validity Issues 3. INSTRUMENTATION: Changes in the attributes of the measuring instrument or procedure take place during the study Potential Bias The intervention changes the subject's evaluation standard with regard to the dimension measured, resulting in incommensurate pre/post data (Response Shift Bias) Study Design Internal Validity Issues 3. INSTRUMENTATION: Changes in the attributes of the measuring instrument or procedure take place during the study Minimize Bias Administer a post-activity survey using a retrospective assessment: Confidence in ability to differentially diagnose vascular dementia from other forms of cognitive dysfunction (No Confidence / Some Confidence / High Confidence / Very High Confidence) Before Activity: ___ After Activity: ___ [ ] Absent from related presentation(s) [ ] Not applicable to my practice 16
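Scoring such a retrospective ("then/now") item is straightforward, since both ratings come from the same post-activity survey. A sketch in Python with made-up responses on a 1-4 confidence scale (the data are illustrative only):

```python
import statistics

# Hypothetical retrospective responses from one post-activity survey:
# each tuple is (confidence before, confidence after) on a 1-4 scale.
responses = [(1, 3), (2, 3), (2, 4), (1, 2), (3, 4), (2, 3)]

before = [b for b, _ in responses]
after = [a for _, a in responses]

# Mean self-assessed gain, with both ratings made against the SAME
# (post-activity) evaluation standard -- avoiding response shift bias
gain = statistics.mean(after) - statistics.mean(before)
print(round(gain, 2))  # 1.33
```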
17 Study Design Internal Validity Issues 4. SELECTION: Bias occurring when naturally existing groups are studied (eg, volunteers) Scenario Physicians choosing to attend a CME activity are surveyed and tested after the activity, with results compared to data obtained from a group of physicians who elected not to attend the activity Study Design Internal Validity Issues 4. SELECTION: Bias occurring when naturally existing groups are studied Potential Bias Physicians who chose not to attend the activity had little interest in the subject matter and thus entered the study with lower motivation and knowledge levels 17
18 Study Design Internal Validity Issues 4. SELECTION: Bias occurring when naturally existing groups are studied Minimize Bias Administering a pretest will help to determine whether or not selection bias is a problem, but will not solve the problem Offer the activity at two time periods, randomly assigning volunteers to attend either the first or second activity (the latter serves as control group) Study Design Internal Validity Issues 5. DIFFERENTIAL ATTRITION: When subjects who drop out differ in important ways from the remaining subjects (related to Non-Response Bias) 18
19 Study Design Internal Validity Issues To Strengthen Internal Validity: 1. Use a comparison group to control some threats to validity 2. Use random assignment to groups to equally distribute prior differences between individuals Does not always work, so differences should also be measured Study Design Internal Validity Issues To Strengthen Internal Validity: 3. Use statistical analysis to control for differences on relevant background variables. In spite of frequent usage, this approach is not optimal![1] [1] Loftin, L. B., & Madison, S. Q. (1991). The extreme dangers of covariance corrections. In B. Thompson (Ed.), Advances in educational research: Substantive findings, methodological developments (Vol. 1, pp. ). Greenwich, CT: JAI Press. Miller, G. A., & Chapman, J. P. (2001). Misunderstanding analysis of covariance. Journal of Abnormal Psychology, 110(1),
20 Study Design External Validity Issues External Validity: Are the results generalizable outside the context of the study? Question to Keep in Mind: Would effects observed for a CME activity generalize to any physician who might participate in a similar future activity? What limitations should be considered? 20
21 Study Design External Validity Issues Factors that may contribute to lack of generalizability: Age Gender Education Occupational goals and interests; motivation Volunteer status Study Design External Validity Issues Scenario Physicians attend a CME activity and are asked to complete an outcomes assessment at post activity and again 3 months later, but fewer complete the follow up assessment Potential Bias Those subjects who made fewer practice changes chose not to complete the assessment 21
22 Study Design External Validity Issues Minimize Bias Include a follow up request and offer incentives to increase response rate Study Design External Validity Issues Strategies Shown to Improve Response Rate: Incentives Monetary incentive vs. no incentive Incentive with questionnaire vs. incentive on return Length Shorter vs. longer questionnaire Edwards, P., et al. Increasing response rates to postal questionnaires: systematic review, BMJ, 324 (May 2002). 22
23 Study Design External Validity Issues Strategies Shown to Improve Response Rate: Appearance Colored ink vs. standard [slight effect] More personalized vs. less personalized [slight effect] Delivery Recorded delivery vs. standard Stamped return envelope vs. business reply [slight effect] First class outward mailing vs. other class Study Design External Validity Issues Strategies Shown to Improve Response Rate: Contact Pre contact vs. no pre contact Follow up vs. no follow up Content More interesting vs. less interesting 23
24 Study Design External Validity Issues To Establish Trust: Provide token of appreciation in advance; Sponsorship by legitimate authority; Make the task appear important; Invoke other exchange relationships To Increase Rewards: Show positive regard; Say thank you; Ask for advice; Support group values; Give tangible rewards; Make the questionnaire interesting; Give social validation; Communicate scarcity of response opportunities To Reduce Social Costs: Avoid subordinating language; Avoid embarrassment; Avoid inconvenience; Make questionnaire short and easy; Minimize requests to obtain personal information; Emphasize similarity to other requests Dillman, D.A. (2000). Mail and Internet Surveys: The Tailored Design Method, Second Edition. New York: John Wiley. Study Design External Validity Issues Minimize Bias (cont.) Randomly select a sample of dropouts and diligently attempt to collect data from them If results for dropouts and respondents are similar, non-response bias is less likely 24
25 Study Design External Validity Issues Minimize Bias Compare the dropouts and respondents on other available measures such as demographics or (better yet) proxy measures related to the outcomes of interest However, be sure that the variable in question is related to the outcome variable! Study Design External Validity Issues Minimize Bias EXAMPLE: Paper published in a 2007 CME journal compared demographic data for 3 groups A demographic difference was reported as a limitation of the study, but not examined in relation to the outcomes of interest May not have been a limitation at all Perhaps the groups also differed on eye color! 25
26 Study Design External Validity Issues Minimize Bias If data are collected anonymously, use a linking code to match respondents and dropouts, and compare their responses to the initial assessment Example: a. 4-digit month/day of birth (e.g., Jan. 15 = 01/15): / b. 2-digit year of graduation from medical school (e.g., 1973 = 73): c. First 3 letters of city in which you attended medical school (e.g., El Paso = ELP): 26
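The linking-code scheme above can be sketched in code. This is an illustrative Python sketch (the function name is my own, not from the presentation); it concatenates the three self-generated fields so the same respondent produces the same code on both surveys without being identifiable.

```python
def linking_code(birth_month, birth_day, grad_year, med_school_city):
    """Build the self-generated code described above, so anonymous
    pre- and post-activity responses can be matched per respondent."""
    letters = "".join(c for c in med_school_city if c.isalpha())
    return (f"{birth_month:02d}{birth_day:02d}"   # a. month/day of birth
            f"{grad_year % 100:02d}"              # b. 2-digit graduation year
            f"{letters[:3].upper()}")             # c. first 3 letters of city

# Using the slide's example: born Jan. 15, class of 1973, El Paso
print(linking_code(1, 15, 1973, "El Paso"))  # 011573ELP
```

Stripping non-letter characters before taking the first three letters keeps multi-word city names (like "El Paso") from producing codes with embedded spaces.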
27 Instrument Design Construct Validity Issues Construct Validity: To what extent do the items measure the presumed theoretical construct(s)? Could be attitudes, satisfaction, knowledge, behaviors, skills, etc. Instrument Design Construct Validity Issues Construct validity can be viewed as encompassing: Face Validity The extent to which an item/instrument appears to actually measure what it is supposed to measure Weakest of the four types because assessment is not supported by any empirical evidence Not an assessment of validity in the technical sense 27
28 Instrument Design Construct Validity Issues Construct validity can be viewed as encompassing: Content Validity The extent to which the items cover a representative sample of the content domain of interest Content experts are often consulted in item development to ensure content validity (eg, using Bloom's taxonomy) Instrument Design Construct Validity Issues Construct validity can be viewed as encompassing: Criterion Validity The extent to which items correlate with items on other instruments that measure the same or different construct(s) Criterion validity exists if high/low correlations emerge as expected 28
29 Instrument Design Construct Validity Issues 1. EXTREMITY BIAS: Avoiding extreme responses Minimize Bias Use more moderate answer options Frequently used scale: Strongly Disagree / Disagree / Neither Agree Nor Disagree / Agree / Strongly Agree Instrument Design Construct Validity Issues 2. FLOOR/CEILING EFFECTS: Compression of scores at the top or bottom of the scale Minimize Bias Spread out the score distribution Original Scale: Poor / Fair / Good / Excellent 29
30 Instrument Design Construct Validity Issues [Chart: frequency (f) of responses across the categories Poor, Fair, Good, Excellent] Instrument Design Construct Validity Issues 2. FLOOR/CEILING EFFECTS: Compression of scores at the top or bottom of the scale Minimize Bias Spread out the score distribution Revision #1: Poor / Fair / Good / Very Good / Excellent 30
31 Instrument Design Construct Validity Issues 2. FLOOR/CEILING EFFECTS: Compression of scores at the top or bottom of the scale Minimize Bias Spread out the score distribution Revision #2: (after listing our expectations) Instrument Design Construct Validity Issues 2. FLOOR/CEILING EFFECTS: Compression of scores at the top or bottom of the scale Minimize Bias Spread out the score distribution Example #2: 7 point knowledge rating changed to a 10 point rating (assessed pre, post, 3 month followup) 31
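A quick numeric check for a ceiling effect is the share of responses piled into the top category. A sketch in Python with hypothetical post-activity ratings (data and threshold interpretation are illustrative, not from the presentation):

```python
from collections import Counter

def ceiling_share(responses, top_category):
    """Fraction of responses in the top category; a large share
    suggests the scale cannot register further improvement."""
    return Counter(responses)[top_category] / len(responses)

# Hypothetical post-activity ratings on the original 4-point scale
ratings = ["Good", "Excellent", "Excellent", "Fair", "Excellent",
           "Excellent", "Good", "Excellent", "Excellent", "Excellent"]
print(ceiling_share(ratings, "Excellent"))  # 0.7
```

A share this large is the signal that motivates the revisions above: adding categories (or widening the point range) spreads the distribution so that post-activity gains remain detectable.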
32 Instrument Design Construct Validity Issues [Chart: Knowledge--Before rating distribution] Instrument Design Construct Validity Issues [Chart: Knowledge--After rating distribution] 32
33 Instrument Design Construct Validity Issues [Chart: Knowledge--Follow-Up rating distribution] Instrument Design Construct Validity Issues 3. ACQUIESCENCE AND SOCIAL DESIRABILITY: Agreeing with all questions or desiring to create a favorable impression Minimize Bias Reverse-code items Anonymity, confidentiality The likelihood of obtaining biased data is high for non-anonymous surveys! 33
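Reverse coding itself is a one-line transformation; a minimal Python sketch (scale endpoints are illustrative):

```python
def reverse_code(response, low=1, high=5):
    """Reverse-code a Likert response so negatively worded items
    point in the same direction as the rest of the scale."""
    return (low + high) - response

# A 5-point item: Strongly Disagree (1) ... Strongly Agree (5)
print([reverse_code(r) for r in [1, 2, 3, 4, 5]])  # [5, 4, 3, 2, 1]
```

The same function handles any symmetric scale, e.g. `reverse_code(2, low=1, high=7)` maps a 2 on a 7-point item to a 6.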
34 Instrument Design Construct Validity Issues 4. RESPONDENT FATIGUE: Tiring when answering questions Minimize Bias Use fewer items, with more important items first RATING SCALE: E = Excellent, G = Good, F = Fair, P = Poor Presenter Topic: Content (E G F P) | Delivery (E G F P) | Course Materials (E G F P) | Audio-Visuals (E G F P) | Overall (E G F P) 34
35 Data Analysis Statistical Conclusion Validity Issues Statistical Conclusion Validity: To what degree does the statistical analysis allow one to draw the correct conclusions? Data Analysis Statistical Conclusion Validity Issues Potential Biases and Threats to Validity 1. Lack of Power Statistical power is the ability to detect relationships between variables that truly exist Need more power to find a needle in a haystack than to find a MACK truck in a haystack Important consideration if you wish to draw a sample (eg, potential CME participants) 35
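The needle-vs-Mack-truck intuition can be made concrete with a small simulation. The sketch below (illustrative, not from the presentation) estimates power for a two-group comparison by simulating many studies and counting how often a simple z-test rejects the null hypothesis:

```python
import random
import statistics

def simulated_power(n_per_group, effect_size, alpha=0.05,
                    reps=2000, seed=1):
    """Monte Carlo estimate of power: the fraction of simulated
    two-group studies whose z-test rejects the null hypothesis.
    effect_size is the true mean difference in SD units."""
    rng = random.Random(seed)
    z_crit = statistics.NormalDist().inv_cdf(1 - alpha / 2)
    hits = 0
    for _ in range(reps):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(effect_size, 1.0) for _ in range(n_per_group)]
        se = (statistics.variance(a) / n_per_group
              + statistics.variance(b) / n_per_group) ** 0.5
        z = (statistics.mean(b) - statistics.mean(a)) / se
        hits += abs(z) > z_crit
    return hits / reps

# The "needle" (small effect) vs. the "Mack truck" (large effect)
print(simulated_power(20, 0.2))  # small effect, n = 20: low power
print(simulated_power(20, 1.0))  # large effect, n = 20: high power
```

With 20 subjects per group, the large effect is detected in most simulated studies while the small effect is usually missed, which is why sample-size planning matters before drawing a sample of potential CME participants.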
36 Data Analysis Statistical Conclusion Validity Issues Potential Biases and Threats to Validity 1. Lack of Power Power is reduced through: a. Small sample size b. Unreliable measures c. Violating the assumptions of the statistical test Data Analysis Statistical Conclusion Validity Issues Potential Biases and Threats to Validity 2. Apply Inappropriate Statistical Tests 36
37 Data Analysis Statistical Conclusion Validity Issues Potential Biases and Threats to Validity 2. Apply Inappropriate Statistical Tests The important issue is how the items are scaled (eg, dichotomy, Likert scale, unordered categories) Parametric tests are preferred, when applicable Conclusions Bias can affect the validity of a study at any point Study Design Item Development Data Analysis Taking proactive steps to reduce bias at each step will result in more valid assessments of CME effects Should also assess Reliability (eg, Cronbach's alpha, test/retest) END RESULT: Improved CME activities 37
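The reliability check mentioned in the conclusions, Cronbach's alpha, can be computed directly from item scores. A sketch in Python with made-up data (the 3-item scale and its scores are hypothetical):

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for internal-consistency reliability.
    `items` holds one list of scores per item; positions within
    each list correspond to the same respondent."""
    k = len(items)
    sum_item_vars = sum(statistics.variance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]
    total_var = statistics.variance(totals)
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

# Hypothetical 3-item scale answered by 5 respondents
item_scores = [
    [4, 5, 3, 2, 4],  # item 1
    [4, 4, 3, 2, 5],  # item 2
    [5, 5, 2, 1, 4],  # item 3
]
print(round(cronbach_alpha(item_scores), 2))  # 0.92
```

Alpha rises when items covary strongly relative to their individual variances; values in this range suggest the items are measuring a common construct.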
38 Additional Slides Instrument Design Construct Validity Issues Questionnaire Item Writing Tips Use only one question per issue (avoid double-barreled questions) Use mutually exclusive response categories Questions with more than one embedded concept are difficult to answer, and thus impossible to interpret The use of "and" or "or" often indicates a double-barreled item Use simple language Adapted from Wildes, Kimberly R. MEI's Hints for Writing Effective Survey Items, Measurement Excellence Initiative site; available online; accessed Jan. 7, 2004; and other sources. 38
39 Instrument Design Construct Validity Issues Questionnaire Item Writing Tips Use the fewest words and simplest grammatical structure possible; avoid compound sentences Minimize respondent reading time in phrasing each item Keep vocabulary consistent with the respondent's level of understanding Avoid, or use sparingly, the phrase "all of the above" Avoid, or use sparingly, the phrase "none of the above" Avoid the use of the phrase "I don't know" Instrument Design Construct Validity Issues Questionnaire Item Writing Tips Avoid negatively worded items Some contend that alternating between negatively and positively worded items helps to reduce response bias (responding similarly to every item). However, negatively worded items have a propensity to load on a single, separate factor. Further, they are easily misunderstood. In general, negative items should be avoided, especially when "strongly disagree" to "strongly agree" response categories are used 39
40 Instrument Design Construct Validity Issues Questionnaire Item Writing Tips Do not use leading questions Leading questions result in bias, even if unintended Questions should be fair to the respondent and not one sided Avoid ambiguous words (i.e., occasionally, regularly) Avoid extreme words (i.e., always, all, never, ever) Instrument Design Construct Validity Issues Pretest the Items Eliminates complex or technical questions Ensures face validity Ensures that item, length, and placement are appropriate Effective at revealing items that may have double meanings or other problems Often facilitates changing open to closed ended questions 40
41 Instrument Design Construct Validity Issues Approaches to Pretesting Think Aloud Have respondents report aloud what they are thinking as they work through the test items Immediate Recall Immediately after choosing a response, have respondents describe why they chose that response Criteria Probe After respondents have marked an answer, ask if various pieces of information in the item affected their response Instrument Design Construct Validity Issues Phases of Pretesting Phase 1: Review by knowledgeable colleagues and analysts Have I included all of the necessary questions? Can I eliminate some of the questions? Did I use categories that will allow me to compare responses to census data or results of other surveys? What are the merits of modernizing categories versus keeping categories as they have been used for past studies? Dillman, D. A. (2000). Mail and Internet surveys: The tailored design method, 2nd Ed. New York: John Wiley & Sons. 41
42 Instrument Design Construct Validity Issues Phases of Pretesting Phase 2: Review by potential respondents to evaluate cognitive and motivational components Are all of the words understood? Do respondents have the information to answer the question? Does the respondent really know? Can the respondent remember? Will the questions be interpreted similarly by all respondents? Instrument Design Construct Validity Issues Phases of Pretesting Phase 2: Review by potential respondents (cont.) Will the respondents be willing to answer? Will respondents truthfully provide correct information? Place sensitive questions at the end of the survey Place the respondent at ease by suggesting that the behavior is common Is the question necessary? Does each question have an answer that can be marked by every respondent? 42
43 Instrument Design Construct Validity Issues Phases of Pretesting Phase 2: Review by potential respondents (cont.) Does the question lead the respondent to answer in a certain way? Is each respondent likely to read and answer each question? Does the mailing package (envelope, cover letter, and questionnaire) create a positive impression? Instrument Design Construct Validity Issues Phases of Pretesting Phase 3: Conduct a small pilot study Have I constructed the response categories for scalar questions so people distribute themselves across categories rather than being concentrated in only one or two of them? Do items from which I hope to build a scale correlate in a way that will allow me to build the scale? What kind of response rate is the survey likely to obtain? Are some questions generating high nonresponse rate? 43
44 Instrument Design Construct Validity Issues Phases of Pretesting Phase 3: Conduct a small pilot study (cont.) Do some variables correlate so highly that for all practical purposes I can eliminate one or more of them? Is useful information being obtained from open ended questions? Are entire pages or sections of the questionnaire being skipped? Instrument Design Construct Validity Issues Phases of Pretesting Phase 4: A final check Present to a few people unfamiliar with the questionnaire for a final look over 44
45 Instrument Design Construct Validity Issues Principles to Follow in Layout Design Place general questions before specific questions Some researchers suggest that the answer to a general question may be influenced by previous specific questions Others suggest random ordering of items to evaluate the existence of order effects; the placement of items would then be based on whether or not order effects are found Place sensitive items near the end of survey Sensitive questions may provoke embarrassment or resentment, resulting in non response Begin with emotionally neutral questions to warm up the respondents Dillman, D. A. (2000). Mail and Internet surveys: The tailored design method, 2nd Ed. New York: John Wiley & Sons. Instrument Design Construct Validity Issues Principles to Follow in Layout Design Questions and corresponding response categories should never be broken up between pages Ensure enough space between items for response (Use white space as divider between items, rather than dividing lines) Provide square boxes for marking answers Square boxes should be no smaller than 1/8 x 1/8 Leave as much space between boxes as within them Do not print items on the front or back of cover pages 45
46 Instrument Design Construct Validity Issues Principles to Follow in Layout Design Format items vertically, rather than horizontally Keep the length of the options fairly consistent If using skip patterns, provide clear instructions Respondent should be clear on when and how to utilize skips Instructions may be provided in parenthesis next to the relevant response choice, or arrows may be used in self administered surveys to indicate response flow Patterns should be checked & tested several times to ensure items perform correctly, before using the survey in the field Instrument Design Construct Validity Issues Principles to Follow in Layout Design Surround answer boxes by a black line & print against a colored background field (Encourages making marks within boxes): Background color should provide clear contrast with white boxes, but not so intense that it results in poor contrast with the words in black print Use 20% tints of certain blues or greens Use 80% or even 100% of the full tint of certain yellows Provide plenty of space and no segmentation marks when seeking open ended answers 46
More informationReliability and Validity
Reliability and Today s Objectives Understand the difference between reliability and validity Understand how to develop valid indicators of a concept Reliability and Reliability How accurate or consistent
More information26:010:557 / 26:620:557 Social Science Research Methods
26:010:557 / 26:620:557 Social Science Research Methods Dr. Peter R. Gillett Associate Professor Department of Accounting & Information Systems Rutgers Business School Newark & New Brunswick 1 Overview
More informationInterviewing, Structured and Unstructured
Interviewing, Structured and Unstructured Department of Government London School of Economics and Political Science 1 Participant Observation 2 Questionnaire Methods Recall-type Questions Evaluative Questions
More informationVALIDITY OF QUANTITATIVE RESEARCH
Validity 1 VALIDITY OF QUANTITATIVE RESEARCH Recall the basic aim of science is to explain natural phenomena. Such explanations are called theories (Kerlinger, 1986, p. 8). Theories have varying degrees
More informationChapter 6. Methods of Measuring Behavior Pearson Prentice Hall, Salkind. 1
Chapter 6 Methods of Measuring Behavior 2009 Pearson Prentice Hall, Salkind. 1 CHAPTER OVERVIEW Tests and Their Development Types of Tests Observational Techniques Questionnaires 2009 Pearson Prentice
More informationMEASUREMENT, SCALING AND SAMPLING. Variables
MEASUREMENT, SCALING AND SAMPLING Variables Variables can be explained in different ways: Variable simply denotes a characteristic, item, or the dimensions of the concept that increases or decreases over
More informationCulture & Survey Measurement. Timothy Johnson Survey Research Laboratory University of Illinois at Chicago
Culture & Survey Measurement Timothy Johnson Survey Research Laboratory University of Illinois at Chicago What is culture? It is the collective programming of the mind which distinguishes the members of
More informationI. Methods of Sociology Because sociology is a science as well as being a theoretical discipline, it is important to know the ways in which
I. Methods of Sociology Because sociology is a science as well as being a theoretical discipline, it is important to know the ways in which sociologists study society scientifically when they do research
More informationResearch Questions and Survey Development
Research Questions and Survey Development R. Eric Heidel, PhD Associate Professor of Biostatistics Department of Surgery University of Tennessee Graduate School of Medicine Research Questions 1 Research
More informationStrategic Sampling for Better Research Practice Survey Peer Network November 18, 2011
Strategic Sampling for Better Research Practice Survey Peer Network November 18, 2011 Thomas Lindsay, CLA Survey Services John Kellogg, Office of Institutional Research Tom Dohm, Office for Measurement
More informationINTRODUCTION TO QUESTIONNAIRE DESIGN October 22, 2014 Allyson L. Holbrook
INTRODUCTION TO QUESTIONNAIRE DESIGN October 22, 2014 Allyson L. Holbrook www.srl.uic.edu General information Please hold questions until the end of the presentation Slides available at www.srl.uic.edu/seminars/fall14seminars.htm
More informationConceptual and Practical Issues in Measurement Validity
Conceptual and Practical Issues in Measurement Validity MERMAID Series, December 10, 2010 Galen E. Switzer, PhD Clinical and Translational Science Institute VA Center for Health Equity Research and Promotion
More informationSurvey Methods in Relationship Research
Purdue University Purdue e-pubs Department of Psychological Sciences Faculty Publications Department of Psychological Sciences 1-1-2009 Survey Methods in Relationship Research Christopher Agnew Purdue
More informationLearning Objectives. Learning Objectives 17/03/2016. Chapter 4 Perspectives on Consumer Behavior
Chapter 4 Perspectives on Consumer Behavior Copyright 2015 McGraw-Hill Education. All rights reserved. No reproduction or distribution without the prior written consent of McGraw-Hill Education. Learning
More informationMN 400: Research Methods. PART II The Design of Research
MN 400: Research Methods PART II The Design of Research 1 MN 400: Research Methods CHAPTER 6 Research Design 2 What is Research Design? A plan for selecting the sources and types of information used to
More informationSURVEYS IN TEST & EVALUATION
SURVEYS IN TEST & EVALUATION REBECCA A. GRIER, PH.D. Surveys are a common tool in operational test and evaluation (OT&E). Yet, their full value as a tool in assessing effectiveness and suitability is often
More informationCONCEPT LEARNING WITH DIFFERING SEQUENCES OF INSTANCES
Journal of Experimental Vol. 51, No. 4, 1956 Psychology CONCEPT LEARNING WITH DIFFERING SEQUENCES OF INSTANCES KENNETH H. KURTZ AND CARL I. HOVLAND Under conditions where several concepts are learned concurrently
More informationCHAPTER 3. Research Methodology
CHAPTER 3 Research Methodology The research studies the youth s attitude towards Thai cuisine in Dongguan City, China in 2013. Researcher has selected survey methodology by operating under procedures as
More informationSURVEY OF MAMMOGRAPHY PRACTICE
Study ID SURVEY OF MAMMOGRAPHY PRACTICE This survey takes about 10-15 minutes to complete. We realize how busy you are and greatly appreciate your time. Your input can help make a difference in mammography.
More informationAOTA S EVIDENCE EXCHANGE GUIDELINES TO CRITICALLY APPRAISED PAPER (CAP) WORKSHEET
AOTA S EVIDENCE EXCHANGE GUIDELINES TO CRITICALLY APPRAISED PAPER (CAP) WORKSHEET Sign in the OASIS Submission Site to view the CAP Worksheet. Critically Appraised Papers (CAPs) are at-a-glance summaries
More informationAbstract Title Page Not included in page count. Authors and Affiliations: Joe McIntyre, Harvard Graduate School of Education
Abstract Title Page Not included in page count. Title: Detecting anchoring-and-adjusting in survey scales Authors and Affiliations: Joe McIntyre, Harvard Graduate School of Education SREE Spring 2014 Conference
More informationEmpirical Research Methods for Human-Computer Interaction. I. Scott MacKenzie Steven J. Castellucci
Empirical Research Methods for Human-Computer Interaction I. Scott MacKenzie Steven J. Castellucci 1 Topics The what, why, and how of empirical research Group participation in a real experiment Observations
More informationIssues in Clinical Measurement
Issues in Clinical Measurement MERMAID Series January 15, 2016 Galen E. Switzer, PhD Clinical and Translational Science Institute University of Pittsburgh What is Measurement? observation of people, clinical
More informationBackground. Increasing Response Rates to Postal Questionnaires. Non-response. Loss of power. Example of Bias. The Roberts Study
Increasing Response Rates to Postal Questionnaires Background Many, if not most, trials use postal questionnaires to collect outcome data on participants. Non-response to postal questionnaires can be a
More informationQuestionnaire Design Clinic
Questionnaire Design Clinic Spring 2008 Seminar Series University of Illinois www.srl.uic.edu 1 Cognitive Steps in Answering Questions 1. Understand question. 2. Search memory for information. 3. Integrate
More informationLecture 9 Internal Validity
Lecture 9 Internal Validity Objectives Internal Validity Threats to Internal Validity Causality Bayesian Networks Internal validity The extent to which the hypothesized relationship between 2 or more variables
More informationCP316 Microprocessor Sysytems and Interfacing Evaluation Results Wilfrid Laurier University
CP316 Microprocessor Sysytems and Interfacing Terry Sturtevant This evaluation for the purpose of evaluating my teaching methods and your impressions of the course. It is anonymous and you can omit any
More informationEXPERIMENTAL RESEARCH DESIGNS
ARTHUR PSYC 204 (EXPERIMENTAL PSYCHOLOGY) 14A LECTURE NOTES [02/28/14] EXPERIMENTAL RESEARCH DESIGNS PAGE 1 Topic #5 EXPERIMENTAL RESEARCH DESIGNS As a strict technical definition, an experiment is a study
More informationAuthor s response to reviews
Author s response to reviews Title: The validity of a professional competence tool for physiotherapy students in simulationbased clinical education: a Rasch analysis Authors: Belinda Judd (belinda.judd@sydney.edu.au)
More informationSelf Report Measures
Self Report Measures I. Measuring Self-Report Variables A. The Survey Research Method - Many participants are randomly (e.g. stratified random sampling) or otherwise (nonprobablility sampling) p.140-146
More informationSchizophrenia and Approaches to Recovery
Schizophrenia and Approaches to Recovery neurosciencecme Snack December 13, 2012 December 13, 2013 Supported by an educational grant from Janssen Pharmaceuticals, Inc., administered by Janssen Scientific
More informationPsych 1Chapter 2 Overview
Psych 1Chapter 2 Overview After studying this chapter, you should be able to answer the following questions: 1) What are five characteristics of an ideal scientist? 2) What are the defining elements of
More information1.1 Goals and Learning Objectives. 1.2 Basic Principles. 1.3 Criteria for Good Measurement. Goals and Learning Objectives
1 1.1 Goals and Learning Objectives Goals and Learning Objectives Goals of this chapter: Be aware of potential sources for bias in survey research Identify survey questions & variables needed to answer
More informationRegression Discontinuity Analysis
Regression Discontinuity Analysis A researcher wants to determine whether tutoring underachieving middle school students improves their math grades. Another wonders whether providing financial aid to low-income
More informationName: Class: Date: 1. Use Scenario 4-6. Explain why this is an experiment and not an observational study.
Name: Class: Date: Chapter 4 Review Short Answer Scenario 4-6 Read the following brief article about aspirin and alcohol. Aspirin may enhance impairment by alcohol Aspirin, a long time antidote for the
More informationDESIGN TYPE AND LEVEL OF EVIDENCE: Randomized controlled trial, Level I
CRITICALLY APPRAISED PAPER (CAP) Hasan, A. A., Callaghan, P., & Lymn, J. S. (2015). Evaluation of the impact of a psychoeducational intervention for people diagnosed with schizophrenia and their primary
More informationMEMO. Representatives of all respondent groups see a primary impact of the DVCM position being increased offender accountability.
MEMO TO: GREENBOOK EXECUTIVE COMMITTEE FROM; TERRY SCHWARTZ RE: DVCM SURVEY RESULTS DATE: APRIL 28, 2006 This memo reports the results of surveys assessing the utilization and impact of the Domestic Violence
More informationCompletely randomized designs, Factors, Factorials, and Blocking
Completely randomized designs, Factors, Factorials, and Blocking STAT:5201 Week 2: Lecture 1 1 / 35 Completely Randomized Design (CRD) Simplest design set-up Treatments are randomly assigned to EUs Easiest
More informationCHAPTER VI RESEARCH METHODOLOGY
CHAPTER VI RESEARCH METHODOLOGY 6.1 Research Design Research is an organized, systematic, data based, critical, objective, scientific inquiry or investigation into a specific problem, undertaken with the
More informationUnderstanding and serving users
Understanding and serving users Surveys, interviews and protocols Intro to data analysis Asking users questions: Questionnaires are probably the most frequently used method of summative evaluation of user
More informationReliability and Validity checks S-005
Reliability and Validity checks S-005 Checking on reliability of the data we collect Compare over time (test-retest) Item analysis Internal consistency Inter-rater agreement Compare over time Test-Retest
More informationChapter 19. Confidence Intervals for Proportions. Copyright 2010 Pearson Education, Inc.
Chapter 19 Confidence Intervals for Proportions Copyright 2010 Pearson Education, Inc. Standard Error Both of the sampling distributions we ve looked at are Normal. For proportions For means SD pˆ pq n
More informationDeveloping and Testing Survey Items
Developing and Testing Survey Items William Riley, Ph.D. Chief, Science of Research and Technology Branch National Cancer Institute With Thanks to Gordon Willis Contributions to Self-Report Errors Self-report
More informationPersonality Traits Effects on Job Satisfaction: The Role of Goal Commitment
Marshall University Marshall Digital Scholar Management Faculty Research Management, Marketing and MIS Fall 11-14-2009 Personality Traits Effects on Job Satisfaction: The Role of Goal Commitment Wai Kwan
More informationWhy do Psychologists Perform Research?
PSY 102 1 PSY 102 Understanding and Thinking Critically About Psychological Research Thinking critically about research means knowing the right questions to ask to assess the validity or accuracy of a
More informationMethodological Issues in Measuring the Development of Character
Methodological Issues in Measuring the Development of Character Noel A. Card Department of Human Development and Family Studies College of Liberal Arts and Sciences Supported by a grant from the John Templeton
More informationYou can t fix by analysis what you bungled by design. Fancy analysis can t fix a poorly designed study.
You can t fix by analysis what you bungled by design. Light, Singer and Willett Or, not as catchy but perhaps more accurate: Fancy analysis can t fix a poorly designed study. Producing Data The Role of
More informationCHAPTER 5: PRODUCING DATA
CHAPTER 5: PRODUCING DATA 5.1: Designing Samples Exploratory data analysis seeks to what data say by using: These conclusions apply only to the we examine. To answer questions about some of individuals
More informationPEER REVIEW HISTORY ARTICLE DETAILS VERSION 1 - REVIEW. Ball State University
PEER REVIEW HISTORY BMJ Open publishes all reviews undertaken for accepted manuscripts. Reviewers are asked to complete a checklist review form (see an example) and are provided with free text boxes to
More informationDOING SOCIOLOGICAL RESEARCH C H A P T E R 3
DOING SOCIOLOGICAL RESEARCH C H A P T E R 3 THE RESEARCH PROCESS There are various methods that sociologists use to do research. All involve rigorous observation and careful analysis These methods include:
More informationResponse to the proposed advice for health and social care practitioners involved in looking after people in the last days of life
Response to the proposed advice for health and social care practitioners involved in looking after people in the last days of life Introduction i. Few conditions are as devastating as motor neurone disease
More informationSESSION # 3 CULTURAL AWARENESS
SESSION # 3 CULTURAL AWARENESS Orientation Session Overview Session Objectives Participants Format Duration Group Size Minimum Staffing Materials Needed Pre-Arrival This session is designed to help host
More informationPurpose. Study Designs. Objectives. Observational Studies. Analytic Studies
Purpose Study Designs H.S. Teitelbaum, DO, PhD, MPH, FAOCOPM AOCOPM Annual Meeting Introduce notions of study design Clarify common terminology used with description and interpretation of information collected
More informationAuthor's response to reviews
Author's response to reviews Title: Diabetes duration and health-related quality of life in individuals with onset of diabetes in the age group 15-34 years - a Swedish population-based study using EQ-5D
More information"Homegrown" Exercises around M&M Chapter 6-1- Help a journalist to be "statistically correct" age-related prevalence, and conflicting evidence exists in favor of the mortality hypothesis. We compared mortality
More informationGCSE PSYCHOLOGY UNIT 2 FURTHER RESEARCH METHODS
GCSE PSYCHOLOGY UNIT 2 FURTHER RESEARCH METHODS GCSE PSYCHOLOGY UNIT 2 SURVEYS SURVEYS SURVEY = is a method used for collecting information from a large number of people by asking them questions, either
More informationThe Impact of Item Sequence Order on Local Item Dependence: An Item Response Theory Perspective
Vol. 9, Issue 5, 2016 The Impact of Item Sequence Order on Local Item Dependence: An Item Response Theory Perspective Kenneth D. Royal 1 Survey Practice 10.29115/SP-2016-0027 Sep 01, 2016 Tags: bias, item
More informationSurvey and Questionnaire Design
Survey and Questionnaire Design Fraser Health Authority, 2017 The Fraser Health Authority ( FH ) authorizes the use, reproduction and/or modification of this publication for purposes other than commercial
More informationMeasuring Attitudes. Measurement and Theory of Democratic Attitudes. Introduction Measurement Summary
Measuring Attitudes and Theory of Democratic Attitudes What are we looking for? What can we expect? Recap: Zaller s RAS-Model Real People might generate attitudes on the fly Based on political information
More informationA) I only B) II only C) III only D) II and III only E) I, II, and III
AP Statistics Review Chapters 13, 3, 4 Your Name: Per: MULTIPLE CHOICE. Write the letter corresponding to the best answer. 1.* The Physicians Health Study, a large medical experiment involving 22,000 male
More informationLikert Scaling: A how to do it guide As quoted from
Likert Scaling: A how to do it guide As quoted from www.drweedman.com/likert.doc Likert scaling is a process which relies heavily on computer processing of results and as a consequence is my favorite method
More informationA Lesson Plan from Rights, Respect, Responsibility: A K-12 Curriculum
Birth Control Basics A Lesson Plan from Rights, Respect, Responsibility: A K-12 Curriculum Fostering responsibility by respecting young people s rights to honest sexuality education. NSES ALIGNMENT: By
More informationUNIVERSITY OF ILLINOIS LIBRARY AT URBANA-CHAMPAIGN BOOKSTACKS
UNIVERSITY OF ILLINOIS LIBRARY AT URBANA-CHAMPAIGN BOOKSTACKS Digitized by the Internet Archive in 2012 with funding from University of Illinois Urbana-Champaign http://www.archive.org/details/interactionofint131cald
More informationChapter 5: Producing Data
Chapter 5: Producing Data Key Vocabulary: observational study vs. experiment confounded variables population vs. sample sampling vs. census sample design voluntary response sampling convenience sampling
More informationCitizens Jury Questionnaire Results
Citizens Jury Questionnaire Results Jury 1 Prepared by Dr Sarah Clement ABSTRACT On 14-16 and 21-23 January 2016, two three-day citizens juries took place in Manchester, tackling policy questions related
More informationCHAPTER III RESEARCH METHODOLOGY
CHAPTER III RESEARCH METHODOLOGY In this chapter, the researcher will elaborate the methodology of the measurements. This chapter emphasize about the research methodology, data source, population and sampling,
More informationTHE EMERGE SURVEY ON TAKING PART IN BIOBANK RESEARCH: VERSION A
THE EMERGE SURVEY ON TAKING PART IN BIOBANK RESEARCH: VERSION A What is this survey about? This survey is about your views on taking part in medical research. We want to understand what you think about
More informationItem Analysis for Beginners
Item Analysis for Beginners John Kleeman Questionmark Executive Director and Founder All rights reserved. Questionmark is a registered trademark of Questionmark Computing Limited. All other trademarks
More informationAppraisal tool for Cross-Sectional Studies (AXIS)
Appraisal tool for Cross-Sectional Studies (AXIS) Critical appraisal (CA) is used to systematically assess research papers and to judge the reliability of the study being presented in the paper. CA also
More informationItem Writing Guide for the National Board for Certification of Hospice and Palliative Nurses
Item Writing Guide for the National Board for Certification of Hospice and Palliative Nurses Presented by Applied Measurement Professionals, Inc. Copyright 2011 by Applied Measurement Professionals, Inc.
More informationOther Designs. What Aids Our Causal Arguments? Time Designs Cohort. Time Designs Cross-Sectional
Statistics in & Things to Consider Your Proposal Bring green folder of readings and copies of your proposal rough draft (one to turn in) Other Designs November 26, 2007 1 2 What Aids Our Causal Arguments?
More informationalternate-form reliability The degree to which two or more versions of the same test correlate with one another. In clinical studies in which a given function is going to be tested more than once over
More informationMeasuring impact. William Parienté UC Louvain J PAL Europe. povertyactionlab.org
Measuring impact William Parienté UC Louvain J PAL Europe povertyactionlab.org Course overview 1. What is evaluation? 2. Measuring impact 3. Why randomize? 4. How to randomize 5. Sampling and Sample Size
More informationUNIT 3 & 4 PSYCHOLOGY RESEARCH METHODS TOOLKIT
UNIT 3 & 4 PSYCHOLOGY RESEARCH METHODS TOOLKIT Prepared by Lucie Young, Carey Baptist Grammar School lucie.young@carey.com.au Credit to Kristy Kendall VCE Psychology research methods workbook for some
More informationBIRKMAN REPORT THIS REPORT WAS PREPARED FOR: JOHN Q. PUBLIC (D00112) ANDREW DEMO (G526VC) DATE PRINTED February
BIRKMAN REPORT THIS REPORT WAS PREPARED FOR: JOHN Q. PUBLIC (D00112) ANDREW DEMO (G526VC) DATE PRINTED February 28 2018 Most of what we hear is an opinion, not a fact. Most of what we see is a perspective,
More informationSurvey Research Methodology
Survey Research Methodology Prepared by: Praveen Sapkota IAAS, TU, Rampur Chitwan, Nepal Social research Meaning of social research A social research is a systematic method of exploring, analyzing and
More informationThe Power to Change Your Life: Ten Keys to Resilient Living Robert Brooks, Ph.D.
The Power to Change Your Life: Ten Keys to Resilient Living Robert Brooks, Ph.D. The latest book I co-authored with my colleague Dr. Sam Goldstein was recently released. In contrast to our previous works
More information