
Survey Says: How to Create High Quality Surveys to Assist in the Evidence Based Decision Making Process
Daniel Byrd, Ph.D., University of California Office of the President
CAIR Conference: November 14-16, 2018, Delta Hotels Anaheim Garden Grove, Anaheim, California

About IRAP Survey Services

Workshop Schedule
- 20 min: Survey research best practices
- 40 min: Developing a good survey (hands-on activity)
- 20 min: Sampling and weighting
- 10 min: Concluding thoughts

The Survey Planning Process
- Define your goals: What is the purpose of conducting the survey? Who are the decision makers? Who will be surveyed?
- Conduct background research: Has anyone else done this type of research? What did they find? Do they have questions that you can use?
- Write survey questions: If needed, write new survey questions; if not, organize the survey using the funnel method and avoid writing biased questions.
- Pilot test the survey: It isn't a good idea to launch a survey without pilot testing. Pilot testing helps the researcher understand (a) whether the survey questions make sense and (b) whether the skip logic works. Test with both staff members and members of the population you are surveying.
- Launch the survey and analyze the data: Once the survey is debugged and pilot tested, you can launch it.
Sources: SurveyMonkey and Qualtrics

Tips for Writing Survey Questions
- Keep questions short, but not too short: drop filler words (e.g., "in order to").
- Only ask one question at a time: avoid double-barreled items such as "How satisfied are you with the cost and quality of your UC education?"
- Minimize demands on recall: people don't have photographic memories, and recall data can be messy and hard to interpret.
- Avoid leading questions, such as "You agree that the UC fosters a welcoming environment."
- Avoid jargon, acronyms, and other complex language, e.g., "How much did your UC education contribute to your ability to make a coherent argument?"
- Make lists exhaustive: include all plausible options so you don't end up with many respondents selecting "Other."
- Use all positive wording for rating scales: while conventional wisdom says to mix item direction to avoid acquiescent and extreme response bias, these are generally not a big concern, and mixed wording can distort factor structures (for factor analysis).
Source: Measuring UX

Social Psychology and Survey Research
- How a survey is constructed can affect how people respond to its items: a response bias.
- Implicit social cognition: the majority of human cognition (mental activity) occurs outside of one's awareness.
- Priming individuals to think about certain constructs can affect how they respond to a survey, which affects the reliability and generalizability of results.
- Classic types of bias:
  - Assimilation effects: judgments and contextual information are positively correlated.
  - Contrast effects: targets primed negatively are evaluated more positively.
Sources: Greenwald and Banaji (1995); Glaser and Banaji (1999); Kobylinski (2014)

Additional Psychological Factors of Importance to Survey Researchers
- Problem: Recall questions in particular are cognitively taxing; the quality of the data can decrease as the number of these questions increases.
  Solution: Randomize items when possible and limit the number of recall questions.
- Problem: A large body of research suggests people have a tendency to endorse the status quo (or say "I don't know"), especially when asked multiple cognitively taxing items.
  Solution: Limit the number of cognitively taxing items (wordy questions and recall questions) on your survey, and randomize.
Sources: Krosnick, J.A. (1991). Response Strategies for Coping with the Cognitive Demands of Attitude Measures in Surveys. Applied Cognitive Psychology, 5, 213-236. Krosnick, J.A. (1999). Survey Research. Annual Review of Psychology, 50, 537-567. Jost, J., Banaji, M., & Nosek, B. A Decade of System Justification Theory: Accumulated Evidence of Conscious and Unconscious Bolstering of the Status Quo. Political Psychology, 25, 861-919.

Response Bias: Beware
- Question order matters. Problem: The order in which items are presented can affect how people respond. Solution: Randomize the order of blocks, and of items within blocks, when possible, but keep a logical flow to your questionnaire; similar questions should be grouped together.
- Ask demographics last: Asking demographic questions can trigger schemas that affect responses (e.g., stereotype threat); asking them last reduces this bias.
- Beware of social desirability: Sensitive items can lead people to respond in socially desirable ways that often do not reflect their attitudes when measured in other ways (e.g., questions about cheating or racial attitudes). Interpret these data with caution and justify the need for asking such items.
Sources: Pew Research Center; Krosnick, J.A. (1999). Survey Research. Annual Review of Psychology, 50, 537-567; Greenwald, A.G., Poehlman, A.T., Uhlmann, E.L., & Banaji, M.R. (2009). Understanding and Using the Implicit Association Test: III. Meta-Analysis of Predictive Validity. Journal of Personality and Social Psychology, 97, 17-41.
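As a minimal sketch of the randomization recommended above, the Python snippet below shuffles block order and item order within blocks while holding demographics for the end; the block names and question identifiers are hypothetical, not part of the presentation.

```python
import random

# Hypothetical question blocks; demographic items are held out and asked last.
blocks = {
    "satisfaction": ["q_cost", "q_quality", "q_support"],
    "campus_climate": ["q_welcoming", "q_belonging"],
    "advising": ["q_access", "q_usefulness"],
}
demographics = ["q_age", "q_race_ethnicity", "q_gender"]

rng = random.Random(2018)            # fixed seed so each form is reproducible
block_order = list(blocks)
rng.shuffle(block_order)             # randomize the order of blocks

questionnaire = []
for name in block_order:
    items = blocks[name][:]
    rng.shuffle(items)               # randomize items within each block
    questionnaire.extend(items)
questionnaire.extend(demographics)   # demographics stay at the end
```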

Workshop: Developing a Good Survey (Hands-on Activity)

Activity Agenda
20 min: Small group activity
- Break into small groups
- Review the survey instrument
- Decide how the questions should be ordered
- Re-write problematic questions (be prepared to discuss your strategy)
20 min: High-level discussion
- What were the most problematic aspects of the items?
- What was your strategy for revising the items, and for the order in which they were presented?

Survey Sampling and Weighting

Census or Sampling
Census
- Pro: true measure of the population
- Pro: able to analyze data from sub-groups
- Con: these surveys take a long time to run
- Con: these surveys can be more expensive
Sample
- Pro: able to get results faster
- Pro: if sampling and weighting are done properly, one can make inferences to the general population
- Con: hard to make inferences about sub-populations
- Con: if sampling is done wrong, it can affect the generalizability of the data
Source: Australian Bureau of Statistics

Sampling Types
Simple random sampling
- Definition: Every member of the population has an equal chance of being sampled.
- Pros: You don't need a large sample size (use SurveyMonkey's online sample size calculator).
- Cons: It is difficult, and often impossible, to analyze data from sub-populations.
Cluster sampling
- Definition: Not used as much in higher education; researchers use a multistage sampling approach, randomly selecting geographic areas and then households.
- Pros: Can be helpful when trying to sample smaller populations (e.g., Blacks).
- Cons: Weighting has to be done to make sure the findings are representative of the population.
Stratified sampling
- Definition: Used when researchers seek to have enough respondents from a population of interest. The population is divided into groups, and then a simple random sample is drawn from each group. Researchers sometimes oversample within certain cells to account for non-response bias for those groups.
- Pros: Can be helpful when trying to sample smaller populations (e.g., Blacks).
- Cons: Weighting has to be done to make sure the findings are representative of the population.
Note: Most statistical programs allow researchers to create samples.
Source: Research Connections
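To make the contrast concrete, here is a short pandas sketch of drawing a simple random sample versus a stratified sample; the file name students.csv, the column names, and the cell size of 50 are hypothetical and only illustrate the idea.

```python
import pandas as pd

# Hypothetical sampling frame; the file and column names are illustrative only.
population = pd.read_csv("students.csv")   # one row per student

# Simple random sample: every student has an equal chance of selection.
srs = population.sample(n=400, random_state=42)

# Stratified sample: divide the frame into groups, then draw a simple random
# sample within each group (here, up to 50 students per race/ethnicity-by-discipline cell).
strata = population.groupby(["race_ethnicity", "discipline"], group_keys=False)
stratified = strata.apply(lambda g: g.sample(n=min(len(g), 50), random_state=42))
```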

Oversampling with Stratified Sampling
- What is oversampling? Oversampling is the process by which researchers sample a larger share of a particular group than its share of the population.
- Why oversample? It allows researchers to analyze data from sub-populations, which might not be possible if the sample were drawn proportionally.
- How to work with oversamples: Design weights account for the oversampling so that the weighted data are representative of the population.
- See also: sample size calculators for stratified random sampling.
Sources: UCLA Statistics, Pew Research Center, and Stat Trek
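A small numeric sketch, with made-up counts, of what oversampling one group looks like and the design weight that a textbook population-count-over-sample-count calculation would later apply. Note this ratio calculation is an illustration only; the deck's own design weights come from the logistic-regression approach on the following slides.

```python
# Made-up counts, for illustration: Group B is deliberately oversampled.
population_counts = {"Group A": 9000, "Group B": 1000}
sample_sizes = {"Group A": 450, "Group B": 250}

# Proportional allocation of 700 interviews would have given Group B only ~70;
# a population-count / sample-count design weight later undoes the oversample.
design_weights = {g: population_counts[g] / sample_sizes[g] for g in population_counts}
print(design_weights)   # {'Group A': 20.0, 'Group B': 4.0}
```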

Survey Weighting
- Objective: adjust for both sampling and non-response bias.
- Outcome: descriptive statistics should match the population.
- Issue: weights work well for descriptive statistics, but the jury is still out on their use with inferential statistics.

Types of Weights
- Design weights: used to correct for both over- and under-sampling within the population.
- Post-stratification weights: used to correct for the error introduced when respondents with a particular characteristic (or characteristics) are more or less likely to respond to a survey than others.

The Logistic Approach to Weighting: Design Weights (for sample surveys)
1. Obtain your population file. This is the file from which you drew your sample.
2. Keep in this file the demographic variables used for sampling and your student ID variable.
3. Left join this data to your sample file.
4. Create a "sampled" variable, where anyone who was sampled gets a 1 and anyone who wasn't gets a 0.
5. Run a logistic regression model with the sampled variable as the dependent variable and the variables used in your stratified sampling (e.g., race/ethnicity, discipline) as your predictor variables. Save the predicted probabilities.
Source: Johnson (2008)
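A minimal pandas/statsmodels sketch of these steps follows. The file names (population.csv, sample.csv), the key student_id, and the predictor columns are assumptions for illustration, and statsmodels is just one of several packages that could fit the logistic model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical files and column names.
population = pd.read_csv("population.csv")                     # frame used for sampling
sample = pd.read_csv("sample.csv")[["student_id"]].copy()
sample["sampled"] = 1

# Left join the sample onto the population and flag who was sampled (1) vs. not (0).
frame = population.merge(sample, on="student_id", how="left")
frame["sampled"] = frame["sampled"].fillna(0).astype(int)

# Logistic regression of the sampled indicator on the stratification variables.
design_model = smf.logit("sampled ~ C(race_ethnicity) + C(discipline)", data=frame).fit()
frame["p_sampled"] = design_model.predict(frame)               # save predicted probabilities
```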

The Logistic Approach to Weighting: Post-Stratification Weights (census and sampled surveys)
1. Obtain your sample file (or your population file, for census surveys).
2. Left join that file with your respondent file on student ID.
3. Create a "responded" variable, where anyone who responded to the survey gets a 1 and anyone who didn't gets a 0.
4. Check your data to see whether the distribution of respondents differs from the sampled population (or the whole population, for a census).
5. Run a logistic regression model with the respondent variable as the dependent variable and the variables used in your descriptive analysis (e.g., race/ethnicity, discipline) as your predictor variables. Save the predicted probabilities.
Source: Johnson (2008)
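Continuing the hypothetical sketch from the previous slide, the response-propensity step might look like this; respondents.csv and the column names are again assumptions.

```python
# Continuing the sketch above: flag who responded within the sampled (or census) file.
respondents = pd.read_csv("respondents.csv")[["student_id"]].copy()
respondents["responded"] = 1

sampled = frame[frame["sampled"] == 1].merge(respondents, on="student_id", how="left")
sampled["responded"] = sampled["responded"].fillna(0).astype(int)

# Quick check: does the respondent distribution differ from the sampled frame?
print(pd.crosstab(sampled["race_ethnicity"], sampled["responded"], normalize="index"))

# Logistic regression of the responded indicator on the analysis variables.
resp_model = smf.logit("responded ~ C(race_ethnicity) + C(discipline)", data=sampled).fit()
sampled["p_responded"] = resp_model.predict(sampled)
```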

The Logistic Approach to Weighting: Creating the Final Weights
- For sampled surveys: multiply the sampling probability by the response probability to get the combined probability, then use the formula below.
- For census surveys: simply use the formula below.
Final_Weight = combined probability × (population total / sum of predicted probabilities)
Source: Johnson (2008)
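Finally, a literal translation of the formula above into code, continuing the same hypothetical sketch; for a census survey, the response probability alone would stand in for the combined probability.

```python
# Continuing the sketch: combine the two probabilities, then apply the slide's formula.
sampled["combined_p"] = sampled["p_sampled"] * sampled["p_responded"]

population_total = len(population)                 # total N in the population file
sum_predicted = sampled["combined_p"].sum()

# Final_Weight = combined probability * (population total / sum of predicted probabilities)
sampled["final_weight"] = sampled["combined_p"] * (population_total / sum_predicted)
```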

Weighting Results (chart comparing population counts, respondent counts, and weighted counts)

Additional Survey Resources
- Jon Krosnick, Professor of Communications, Political Science, and Psychology at Stanford University
- An introduction to factor analysis
- ICPSR at the University of Michigan
- Pew Research Center survey methodology resources
Contact: Daniel.Byrd@ucop.edu

Final Thoughts
How can the material discussed today be useful in your work?
For more information about survey research at the University of California, visit https://www.ucop.edu/institutional-research-academic-planning/services/surveyservices/index.html