Research Questions and Survey Development


R. Eric Heidel, PhD
Associate Professor of Biostatistics, Department of Surgery
University of Tennessee Graduate School of Medicine

Research Questions

The Research Question

The research question is the foundation of everything empirical, and it accounts for roughly 80% of the statistical consultant's time. It directly influences:
- Hypotheses
- Research design
- Population and sampling
- Variables
- Sample size, statistical power, and effect size (see the sketch below)
- Database structure and management
- Statistical analysis
- Publication, presentation, and dissemination of findings

Turn clinical problems into research questions that are answerable, attainable, and deductive. Where to look for research questions:
- YOUR clinical practice
- YOUR patient population
- The literature
- Peers, colleagues, and mentors
- Treatments, therapy, prognosis, and prevention
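To make the sample-size linkage concrete, here is a minimal sketch (not from the slides; the effect size, alpha, and power values are illustrative assumptions) of how a question's expected effect translates into a required sample size, using statsmodels:

```python
# Minimal sketch: how effect size, alpha, and power drive sample size.
# The inputs below are illustrative assumptions, not values from the slides.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,          # assumed medium effect (Cohen's d)
    alpha=0.05,               # two-sided Type I error rate
    power=0.80,               # desired statistical power
    alternative="two-sided",
)
print(f"Participants needed per group: {n_per_group:.0f}")  # about 64
```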

Feasible, Interesting, Novel, Ethical, Relevant (the FINER criteria)

Feasible
- Time
- Resources
- Expertise
- Funding
- Clinical environment
- Educational environment
- Simulation environment

Interesting
- Intrinsic motivation: personally rewarding, professionally rewarding
- Interesting to peers and colleagues
- Interesting to funding institutions (NIH, NSF, etc.)
- Serves an immediate clinical or educational need
- Nature of training in clinic vs. simulation

Novel
- New evidence
- Gap in the literature
- Replication of published studies
- Addition of confounders
- More precise and accurate outcomes
- Subgroups and disparities
- Methods of training
- Meeting guidelines

Ethical
- Institutional Review Board (IRB)
- Consequences of testing performance in simulation
- Effects of treatment and non-treatment

Relevant
- Influences clinical practice
- Improves clinical outcomes
- Increases training capacity with new educational interventions
- Development and testing of curricula
- Quality improvement
- Changes the standard of care
- Makes a meaningful contribution to the literature

PICO: Population, Intervention, Comparator, Outcome

Population
- Inclusion criteria
- Exclusion criteria

Intervention
- Independent variable (first level)
- Grouping variable
- Treatment
- Characteristic

Comparator
- Control group (second level)
- Placebo
- Does not possess the characteristic

Outcome
- Scale of measurement
- Gold standard

An illustrative example: in adults with type 2 diabetes (P), does metformin (I), compared with placebo (C), lower HbA1c (O)?

Population
- The population must be described in order to make proper inferences and generalizations.
- What is the population of interest? Define it in terms of inclusion criteria and exclusion criteria.
- Inclusion criteria: characteristics that participants MUST possess to be included in the study.
  - Demographic characteristics
  - Clinical characteristics
  - Geographic characteristics
  - Temporal characteristics
  - Objectively defined or deductive
- Exclusion criteria: characteristics that participants should not have.
  - Likelihood of being lost to follow-up
  - Would create statistical noise due to lack of data
  - High risk of potential adverse effects or comorbidities
  - Vague or inductive

Intervention
- Hypothesis-driven studies focus on the effects of an intervention or independent variable on an outcome.
- Retrospective, observational studies: a group or treatment variable
- Independent variable: what the researcher manipulates, or a characteristic with some association with the outcome
- Experimental studies: a treatment or intervention given to participants under random assignment
- The intervention must be described vividly.

Comparator
- A comparison group is a fundamental, requisite need for establishing treatment effects or treatment efficacy.
- Retrospective, observational studies: a control or comparison group (an independent variable); the researcher chooses the comparator group.
- Experimental studies: a control or placebo treatment given to participants under random assignment.
- Active or inactive control treatments must be explained.

Outcome
- The outcome is what researchers measure in a study; it is also known as the dependent variable.
- There is a hypothesized relationship between the intervention/independent variable and the outcome variable.
- Use the gold-standard method of measuring the outcome.
- The scale of measurement of the outcome influences:
  - Statistical power
  - The nature of the effect size
  - The needed sample size and the sample size calculation
  - The choice of statistical test

[Diagrams: case-control design mapped onto PICO. The researcher, situated in the present, selects cases and controls from the population by outcome status and looks back at intervention vs. comparator exposure.]

[Diagrams: retrospective selection of a cohort. The researcher, situated in the present, follows intervention and comparator groups drawn from the population forward to the outcome.]

[Diagram: randomized experiment. From the population, participants are assigned to the treatment (intervention) or placebo (comparator) and followed to the outcome, with results summarized in a 2 x 2 table:]

              Outcome: Yes   Outcome: No
Intervention       107            39
Comparator          17            71
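A minimal sketch of how a 2 x 2 table like this one could be analyzed in Python with SciPy (the counts come from the table above; the choice of test is an assumption, since the slides do not name one):

```python
# Sketch: chi-square test and odds ratio for the 2 x 2 table above.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[107, 39],    # intervention: outcome yes, outcome no
                  [17, 71]])    # comparator:   outcome yes, outcome no

chi2, p, dof, expected = chi2_contingency(table)
odds_ratio = (107 * 71) / (39 * 17)   # cross-product ratio

print(f"chi-square = {chi2:.1f}, df = {dof}, p = {p:.3g}")
print(f"odds ratio = {odds_ratio:.2f}")
```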

[Diagrams: a sample is drawn from the population, intervention and comparator groups are formed, and the outcome is measured; a second panel adds confounding between the groups and the outcome.]

Secondary Research Questions
- Scientific practitioners often want to ask numerous questions.
- Secondary, tertiary, and ancillary questions come from the literature and from informed, inquisitive thinking grounded in clinical experience.
- Secondary questions are also associated with measuring new variables, untested associations, or other outcomes.
- Ask these questions only as needed: the risk of a Type I error increases as more questions are asked (see the sketch below), and all questions should be hypothesis driven.

PICO Search Engine
The National Institutes of Health (NIH) offers an excellent tool for searching MEDLINE and PubMed using the PICO framework:
https://pubmedhh.nlm.nih.gov/nlmd/pico/piconew.php
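A short sketch of that Type I error inflation, assuming independent tests at alpha = 0.05 (the Bonferroni adjustment shown is one common remedy; the slides do not prescribe a specific correction):

```python
# Sketch: the family-wise error rate (FWER) grows with the number of tests.
alpha = 0.05

for m in (1, 3, 5, 10):
    fwer = 1 - (1 - alpha) ** m    # P(at least one false positive) across m tests
    bonferroni = alpha / m         # adjusted per-test significance threshold
    print(f"{m:>2} tests: FWER = {fwer:.3f}, Bonferroni alpha = {bonferroni:.4f}")

# With 10 tests at alpha = 0.05, there is a ~40% chance of at least one
# spurious "significant" finding even if every null hypothesis is true.
```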

Survey Development

Four Reasons for Creating a Survey
1. Establishing Prevalence
2. Comparing Known Groups
3. Validating Constructs
4. Assessing Change Across Time

The Eight Steps for Creating Surveys
1. Create a Construct Specification
2. Expert Review of the Construct Specification
3. Choose a Survey Methodology
4. Write the Survey Items
5. Pretest the Survey
6. Structure the Survey
7. Survey Pilot Study
8. Survey Validation Study

The guiding question throughout: what is the construct?

Construct Specification
1. Operational Definition of the Construct
2. Theoretical, Conceptual, or Physiological Framework
3. Population of Interest (Inclusion and Exclusion Criteria)
4. Recruitment of Participants
5. Resources, Administration, and Time
6. Purpose of the Survey Instrument
7. Conceptually or Theoretically Similar Constructs
8. Specific Outcomes of the Survey Instrument

What is the construct? Definition, theory, population, content, similarities, and outcomes.

Construct Specification Components
1. Operationalize Each Content Area
2. Identify Specific Components or Items
3. Describe Components
4. Describe Response Sets
5. Provide Citations
6. Table of Specifications
7. Expert Review
8. Integration of Revisions

Example content areas for a construct of phenomenological perception:
1. Time
2. The body
3. Others
4. The world

Six Types of Surveys
1. Tests
2. Rating Scales
3. Performance Ratings
4. Checklists
5. Psychological Instruments
6. Inventories

Six Parts of Surveys
1. Survey Title
2. Survey Introduction
3. Survey Instructions
4. Survey Items
5. Survey Demographic Items
6. Closing Statement

Modes of Survey Administration
1. One-on-one Interview
2. Group Administration
3. Telephone Administration
4. Postal Mail
5. Electronic Administration

Writing Survey Instrument Items
- Every survey item has two parts: an item stem and a response set.
- There should be a logical flow between the item stem and the response set.
- Write each item to cover only ONE topic or concept.
- Items should reflect the citations and allocated percentages of the construct specification (construct alignment).
- Use the same tense for all items.
- Write in the positive sense of understanding.
- Write twice as many items as you think you will need.

Steps for Pretesting a Survey
1. Select a Small Sample (n = 5–10)
2. Administer the Survey Items
3. Conduct a Focus Group
4. Integrate Feedback

Formally Structure a Survey
1. All Six Survey Parts Pretested
2. Professional Presentation
3. Trial Runs
4. Post, Publish, or Print

Survey Pilot Study
1. Sample Size (n = 150–300)
2. Statistical Assumptions
3. Principal Component Analysis (PCA)
4. Reliability Analysis (a sketch of steps 3 and 4 follows below)
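A minimal sketch of the pilot-study analyses, using simulated item responses (the data, the 8-item scale, and the sample size are assumptions for illustration; the slides do not specify an implementation):

```python
# Sketch: principal component analysis plus Cronbach's alpha on pilot data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
latent = rng.normal(size=(200, 1))                       # one underlying construct
items = latent + rng.normal(scale=0.8, size=(200, 8))    # 8 noisy items, n = 200

pca = PCA().fit(items)   # do the items load on a single dominant component?
print("Variance explained per component:", pca.explained_variance_ratio_.round(2))

def cronbach_alpha(X):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = X.shape[1]
    item_variances = X.var(axis=0, ddof=1).sum()
    total_variance = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
```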

Survey Validation Study
1. Sample Size (n = 300–1,000)
2. Statistical Assumptions
3. Confirmatory Factor Analysis (CFA)
4. Validity Analysis

Psychometrics

Reliability
- Internal Consistency: Cronbach's alpha (α), split-half, Kuder-Richardson 20 (KR-20)
- Test-Retest
- Spearman-Brown
- Alternate/Parallel Forms
- Inter-rater: kappa, intraclass correlation coefficient (see the sketch below)
- Principal Components Analysis (PCA)

Validity
- Construct Validity
- Concurrent Validity
- Predictive Validity
- Content Validity
- Known-groups Validity
- Convergent Validity
- Divergent Validity
- Incremental Validity
- Face Validity
- Confirmatory Factor Analysis (CFA)
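As one example of the inter-rater statistics above, a minimal sketch of Cohen's kappa with scikit-learn (the two raters' codes are invented for illustration):

```python
# Sketch: Cohen's kappa, chance-corrected agreement between two raters.
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # hypothetical binary codes
rater_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.2f}")   # 0.58 here: raw agreement 0.80, chance 0.52
```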

Validity Measures
- Construct Validity: encompasses all forms of validity
- Predictive Validity: ability to predict future occurrences
- Concurrent Validity: ability to predict current events
- Convergent Validity: correlation with similar constructs
- Divergent (Discriminant) Validity: correlation with dissimilar constructs
- Known-groups Validity: ability to differentiate between existing groups in a population
- Correlations and between-subjects statistics are used primarily (see the sketch below).

Precision in Measurement:
1. Reliability
2. Stability
3. Confidence
4. Consistency
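A minimal sketch of two of these checks on simulated scores (the variables, group labels, and effect sizes are invented for illustration): convergent validity as a correlation with a similar measure, and known-groups validity as a between-subjects comparison.

```python
# Sketch: convergent validity (correlation) and known-groups validity (t-test).
import numpy as np
from scipy.stats import pearsonr, ttest_ind

rng = np.random.default_rng(0)
new_scale = rng.normal(50, 10, 100)
similar_measure = new_scale + rng.normal(0, 6, 100)   # related, established measure

r, p_conv = pearsonr(new_scale, similar_measure)
print(f"Convergent validity: r = {r:.2f}, p = {p_conv:.3g}")

group_high = rng.normal(60, 10, 50)   # group expected to score higher
group_low = rng.normal(50, 10, 50)
t, p_known = ttest_ind(group_high, group_low)
print(f"Known-groups validity: t = {t:.2f}, p = {p_known:.3g}")
```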

Accuracy in Measurement:
1. Validity
2. Utility
3. Interpretability
4. Generalizability