Farzad Eskandanian

Measurement is the process of observing and recording observations. Two important issues: 1. Understanding the fundamental ideas: levels of measurement (nominal, ordinal, interval, and ratio) and reliability. 2. Types of measures: survey research (design of interviews and questionnaires), scaling (methods for developing a scale), and qualitative research (non-numerical measurement approaches).
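A minimal Python sketch, with invented data, of which summary statistics are meaningful at each level of measurement:

```python
from statistics import mode, median, mean

# Hypothetical data, one small sample per level of measurement.
colors = ["red", "blue", "red", "green"]    # nominal: unordered categories
likert = [1, 2, 2, 3, 5]                    # ordinal: ranked responses
temps_c = [20.0, 22.0, 25.0]                # interval: equal units, no true zero
weights_kg = [50.0, 75.0, 100.0]            # ratio: true zero, ratios meaningful

# Each level permits the statistics of the levels below it, plus its own.
print(mode(colors))                   # nominal: the mode is the only meaningful "average"
print(median(likert))                 # ordinal: the median respects rank order
print(mean(temps_c))                  # interval: means are meaningful; "twice as hot" is not
print(weights_kg[2] / weights_kg[0])  # ratio: 100 kg really is twice 50 kg
```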

Construct validity concerns generalizing from your program or measures to the concept behind them. It is a labeling issue. Examples: a Head Start program (is the label accurate?); a measure you term "self esteem" (is that what you were really measuring?). Construct validity is the degree to which a test measures what it claims to be measuring.

We really want to talk about the validity of any operationalization. Operationalization? Any time you translate a concept or construct into a functioning, operating reality, you should be concerned about how well you did the translation. Construct validity is the approximate truth of the conclusion that your operationalization accurately reflects its construct.

There are different ways to demonstrate different aspects of construct validity. Translation validity: the degree to which you accurately translated your construct into the operationalization. Face validity: look at the operationalization and judge whether, "on its face," it seems like a good translation of the construct. Content validity: check the operationalization against the relevant content domain for the construct. It is not always easy to decide on the criteria that constitute the content domain.

Criterion-related validity checks the performance of your operationalization against some criterion: you make a prediction about how the operationalization will perform based on your theory of the construct. Types: Predictive validity: assess the operationalization's ability to predict something it should theoretically be able to predict. Concurrent validity: assess its ability to distinguish between groups that it should theoretically be able to distinguish between. Convergent validity: examine the degree to which the operationalization is similar to other operationalizations it theoretically should be similar to. Discriminant validity: examine the degree to which it is not similar to other operationalizations it theoretically should not be similar to.
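As a sketch of predictive validity, assume a hypothetical aptitude test (the operationalization) and a later outcome it should theoretically predict; a high Pearson correlation between them is criterion-related evidence. The data and names here are invented:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation: cov(x, y) / (sd(x) * sd(y))."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

# Hypothetical data: an aptitude test (the operationalization) and a later
# outcome it should theoretically predict (first-year GPA).
test_scores = [52, 61, 70, 78, 85, 93]
later_gpa = [2.1, 2.6, 2.9, 3.2, 3.5, 3.8]

r = pearson(test_scores, later_gpa)
print(round(r, 3))  # a high correlation is evidence of predictive validity
```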

Two ideas of construct validity: Definitionalist: precise, absolute definitions. Rationalist: meanings differ relatively, not absolutely. In court you tell "the truth, the whole truth, and nothing but the truth." In our context, our measure should reflect "the construct, the whole construct, and nothing but the construct." Example: "self esteem, all of self esteem, and nothing but self esteem."

To establish construct validity, the following conditions are required: operationalize within a semantic net; control the operationalization of the construct so that it looks similar to what you theoretically mean; and provide evidence that your data support your theoretical view of the relations among constructs.

Show both correspondence (convergence) between similar constructs and discrimination between dissimilar constructs. Correlations between theoretically similar measures should be high, while correlations between theoretically dissimilar measures should be low. Note that convergent correlations should always be higher than discriminant ones.

We theorize that all four items reflect the idea of self esteem. Observation gives us the intercorrelations of the four items. A pattern of uniformly high correlations indicates that the four items are converging on the same idea (construct).

Discriminant validity: show that measures that should not be related in reality are not related.
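A small sketch of both patterns, with invented item scores: four items meant to measure self esteem (se1 through se4) should intercorrelate highly (convergent), while their correlations with a theoretically unrelated construct (here a hypothetical math score) should stay low (discriminant):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation: cov(x, y) / (sd(x) * sd(y))."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

# Hypothetical scores for six respondents.
se1 = [3, 4, 2, 5, 4, 1]
se2 = [3, 5, 2, 4, 4, 2]
se3 = [2, 4, 3, 5, 4, 1]
se4 = [3, 4, 2, 5, 5, 2]
math_scores = [2, 3, 5, 4, 2, 3]  # a theoretically dissimilar construct

items = [se1, se2, se3, se4]
convergent = [pearson(a, b) for i, a in enumerate(items) for b in items[i + 1:]]
discriminant = [abs(pearson(a, math_scores)) for a in items]

# Convergent correlations should be high, discriminant ones low.
print(min(convergent), max(discriminant))
```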

Inadequate preoperational explication of constructs: you didn't do a good enough job of operationally defining what you mean by the construct. Think more about the concepts; use concept mapping and expert opinions. Mono-operation bias: if you only use a single version of a program in a single place at a single point in time, you are not capturing the whole picture. Solution: try to implement multiple versions of your program. Mono-method bias: this refers to your measures or observations, not to your programs or causes. Solution: try to implement multiple measures of key constructs.

Interaction of different treatments: can you really label the effect as a consequence of your program alone? Interaction of testing and treatment. Restricted generalizability across constructs: unintended consequences of the program. Confounding constructs and levels of constructs: your label is not a good description of what you actually implemented. Social threats: hypothesis guessing, evaluation apprehension, and experimenter expectancies.

The multitrait-multimethod (MTMM) matrix provides evidence that your measure has construct validity. This model links the conceptual/theoretical realm with the observable one, because that link is the central concern of construct validity.

The MTMM matrix is a correlation matrix with several named regions: the reliability diagonal (monotrait-monomethod), the validity diagonals (monotrait-heteromethod), the heterotrait-monomethod triangles, the heterotrait-heteromethod triangles, the monomethod blocks, and the heteromethod blocks.
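A toy sketch of these regions, using an invented 2-trait (T1, T2) by 2-method (M1, M2) correlation matrix; in a real MTMM analysis the numbers come from observed data:

```python
# Hypothetical MTMM correlations; all values are invented for illustration.
labels = ["T1M1", "T2M1", "T1M2", "T2M2"]
R = {
    ("T1M1", "T1M1"): 0.89, ("T2M1", "T2M1"): 0.85,  # reliability diagonal
    ("T1M2", "T1M2"): 0.91, ("T2M2", "T2M2"): 0.87,
    ("T1M1", "T2M1"): 0.30, ("T1M2", "T2M2"): 0.28,  # heterotrait-monomethod
    ("T1M1", "T1M2"): 0.72, ("T2M1", "T2M2"): 0.70,  # validity diagonal (monotrait-heteromethod)
    ("T1M1", "T2M2"): 0.21, ("T2M1", "T1M2"): 0.19,  # heterotrait-heteromethod
}

def r(a, b):
    # The matrix is symmetric, so look up either ordering of the pair.
    return R.get((a, b), R.get((b, a)))

reliabilities = [r(x, x) for x in labels]
validity = [r("T1M1", "T1M2"), r("T2M1", "T2M2")]
het_mono = [r("T1M1", "T2M1"), r("T1M2", "T2M2")]
het_het = [r("T1M1", "T2M2"), r("T2M1", "T1M2")]

# The expected ordering as evidence of construct validity:
# reliabilities > validity diagonal > heterotrait correlations.
print(min(reliabilities) > max(validity) > max(het_mono) > max(het_het))
```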

Assessing construct validity means linking two patterns, the theoretical and the observational. A test of significance is usually required, such as a t-test or ANOVA.

Reliability is about the quality of measurement: the "consistency" or "repeatability" of your measures. True score theory: observed score = true ability + random error. A measure that has no random error (is all true score) is perfectly reliable.

Measurement error has two components: random error, or noise, and systematic error, or bias.

Usually we don't know the true score; we only have the observation X. Using two observations of the same true score, we can see that the true score is what they share, but we can't calculate the variance of the true score directly, so we estimate it from the correlation between the two observations:

corr(X1, X2) = cov(X1, X2) / sqrt(var(X1) * var(X2))
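A simulation sketch of this estimate, assuming invented variances (true score variance 100, error variance 25, so the expected reliability is 100/125 = 0.8):

```python
import random
from math import sqrt

random.seed(0)

def pearson(x, y):
    """Pearson correlation: cov(x, y) / (sd(x) * sd(y))."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

# True score theory: each observed score X = T + e, with the same
# true score T behind two parallel observations of each person.
n = 10_000
true_scores = [random.gauss(50, 10) for _ in range(n)]
x1 = [t + random.gauss(0, 5) for t in true_scores]
x2 = [t + random.gauss(0, 5) for t in true_scores]

# corr(X1, X2) estimates var(T) / var(X) = 100 / (100 + 25) = 0.8.
r_est = pearson(x1, x2)
print(round(r_est, 2))
```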

Inter-rater or inter-observer reliability: used to assess the degree to which different raters/observers give consistent estimates of the same phenomenon. Test-retest reliability: used to assess the consistency of a measure from one time to another. Parallel-forms reliability: used to assess the consistency of the results of two tests constructed in the same way from the same content domain. Internal consistency reliability: used to assess the consistency of results across items within a test.
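A minimal sketch of one internal-consistency estimate, Cronbach's alpha, computed on invented item scores (each row of `items` is one item, each column one respondent):

```python
def cronbach_alpha(items):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / total variance)."""
    k = len(items)          # number of items
    n = len(items[0])       # number of respondents

    def var(xs):
        # Sample variance with (n - 1) in the denominator.
        m = sum(xs) / n
        return sum((x - m) ** 2 for x in xs) / (n - 1)

    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score
    item_var = sum(var(item) for item in items)
    return (k / (k - 1)) * (1 - item_var / var(totals))

# Hypothetical scores on four items for six respondents.
items = [
    [3, 4, 2, 5, 4, 1],
    [3, 5, 2, 4, 4, 2],
    [2, 4, 3, 5, 4, 1],
    [3, 4, 2, 5, 5, 2],
]
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # values near 1 indicate high internal consistency
```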

Think of the center of the target as the concept that you are trying to measure.