Closed Coding: Analyzing Qualitative Data. Tutorial @ VIS17. Melanie Tory

A code in qualitative inquiry is most often "a word or short phrase that symbolically assigns a summative, salient, essence-capturing, and/or evocative attribute for a portion of language-based or visual data." [Saldaña, J. (2013). The Coding Manual for Qualitative Researchers. Los Angeles: SAGE Publications.]

What is closed coding? Identifying and marking interesting items using a pre-established coding scheme. Usually done on video, but could also be applied to transcripts, audio, etc.

When to use closed coding?
- Your research question involves counting or timing events
- You know a priori what the interesting events are
- You cannot mark the events by automatic logging

Closed vs. Open Coding
Closed coding:
- Suitable for timing or counting events
- Must know in advance what is relevant to your research question
- No richness
- Confirmatory research questions
- Largely quantitative
Open coding:
- Suitable for capturing and understanding meaning in your data
- You may have only some a priori idea about what is interesting
- Captures rich nuance
- Open-ended research questions
- Purely qualitative

Where do the (closed) codes come from?
- Previous research or theory
- May be obvious given your research questions / hypotheses
- Questions or topics from your interview guide
- Establish the coding scheme during the analysis:
  - If you already have a good idea: view the video once or twice and agree on a set of codes
  - If you have no idea: Step 1: open coding (covered later); Step 2: closed coding, i.e. revisit your data to apply the codes consistently throughout all your video

[Mahyar et al., VAST 2014]

What should you code? Depends on your research question. Typical coded items may include:
- Events
- Behaviors
- States of the people, system, or environment
- Topics mentioned in an interview
Make sure you know exactly what you are coding and how you will use that information. Coding is very time-consuming!

Code Types
- Descriptive coding: codes describe what is in the data
- Analytic coding: codes reflect the researcher's analytical thinking about why what is occurring in the data might be happening

Code Types
- Point events: marked at a single point in time
- Duration events: mark the start and end, which enables calculation of duration (see the sketch below)
Coding durations is much more time-consuming. Only do it if you are certain you need duration information.
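
The sketch below illustrates the bookkeeping that duration events require: start and end marks must be paired before a duration can be computed. The event log, code names, and timestamps are invented for the example; real coding tools record this log for you.

```python
from collections import defaultdict

# Hypothetical closed-coding log: (timestamp in seconds, code, mark type).
# Point events need a single "point" mark; duration events need "start" and "end".
marks = [
    (12.0, "share_idea", "point"),
    (30.5, "group_discussion", "start"),
    (95.0, "group_discussion", "end"),
    (110.2, "group_discussion", "start"),
    (140.7, "group_discussion", "end"),
]

open_starts = {}             # code -> timestamp of the not-yet-closed start mark
totals = defaultdict(float)  # code -> total coded duration in seconds
counts = defaultdict(int)    # code -> number of occurrences

for t, code, kind in marks:
    if kind == "point":
        counts[code] += 1
    elif kind == "start":
        open_starts[code] = t
    elif kind == "end":
        counts[code] += 1
        totals[code] += t - open_starts.pop(code)

print(dict(counts))                                 # {'share_idea': 1, 'group_discussion': 2}
print({c: round(d, 1) for c, d in totals.items()})  # {'group_discussion': 95.0}
```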

Code Types
- Flat coding, e.g. position of partners around a table: same side, opposite sides, kitty-corner
- Hierarchical coding, e.g. user goals (see the roll-up sketch below):
  - Share: share hypothesis, share resources, share idea
  - Identify: identify object, identify person
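
If you use a hierarchical scheme, counts of the leaf codes can later be rolled up to their parent categories. A minimal sketch, assuming (purely for illustration) that each coded event is recorded as a "Parent/Child" path string:

```python
from collections import Counter

# Hypothetical coded events recorded as "Parent/Child" code paths.
events = [
    "Share/Share hypothesis",
    "Share/Share resources",
    "Share/Share idea",
    "Identify/Identify object",
    "Share/Share idea",
    "Identify/Identify person",
]

leaf_counts = Counter(events)
parent_counts = Counter(code.split("/")[0] for code in events)

print(leaf_counts["Share/Share idea"])  # 2
print(parent_counts)                    # Counter({'Share': 4, 'Identify': 2})
```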

Coding scheme
Before applying closed coding, it is critical to establish a clear coding scheme:
- List all the codes and their meanings (a minimal example follows this list)
- Make sure all coders understand the scheme
- Code a portion together, discuss disagreements, and modify the scheme
- Test your coding scheme by having another person use it; see if they understand it the same way you do
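
As a small illustration of "list all the codes and their meanings", a codebook can be kept as a simple mapping and used to catch codes that drift outside the agreed scheme. The code names and definitions here are hypothetical, not from the tutorial:

```python
# Hypothetical coding scheme: each code with the meaning all coders agreed on.
codebook = {
    "share_idea":    "A participant verbally proposes a new idea to the group.",
    "point_gesture": "A participant points at the shared display.",
    "off_task":      "Conversation unrelated to the task for more than 5 seconds.",
}

applied_codes = ["share_idea", "off_task", "share_finding"]

# Flag any applied code that is not defined in the scheme.
unknown = [c for c in applied_codes if c not in codebook]
if unknown:
    print("Codes missing from the coding scheme:", unknown)  # ['share_finding']
```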

Software Tools
- Pretty much necessary for closed coding of video
- Allow you to mark events via keystroke as the video plays; the tool automatically logs the event + time (a toy sketch of the idea follows)
- Examples: Noldus Observer, MAXQDA
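
Dedicated tools do this far better, but purely as a toy sketch of the underlying idea (each mark is stored together with the elapsed time at which it was made), here is a minimal command-line logger. Everything in it is invented for the example:

```python
import time

# Toy stand-in for keystroke-based event marking while a video plays:
# type a code and press Enter; the elapsed time is stored with the code.
# (Real tools such as Noldus Observer bind single keys to codes and stay
# synchronized with the video clock.)
log = []
start = time.monotonic()

print("Enter one code per line (blank line to stop):")
while True:
    code = input("> ").strip()
    if not code:
        break
    log.append((round(time.monotonic() - start, 1), code))

for t, code in log:
    print(f"{t:8.1f}s  {code}")
```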

Multiple coders + inter-coder agreement
- Can your coding scheme be applied consistently by different researchers?
- Inter-coder agreement measures "the extent to which the different judges tend to assign exactly the same rating to each object" (Tinsley & Weiss, 2000, p. 98)
- Often expected for papers that involve closed coding (but not open coding)
- Two coders is typical; both must code some subset of the data with the same coding scheme

What about inter-coder agreement? Also called inter-coder reliability or inter-rater reliability: terms for the extent to which independent coders evaluate some data and reach the same conclusion. It measures "the extent to which the different judges tend to assign exactly the same rating to each object" (Tinsley & Weiss, 2000, p. 98). In computational linguistics, such agreement data is sometimes used to train an algorithm to do as well as a human.

Inter-coder agreement measures
- Cohen's kappa: works for 2 coders; assumes nominal data
  [Gwet, K. L. (2014). Handbook of Inter-Rater Reliability, 4th Edition.]
- Fleiss' kappa: works for any fixed number of coders; assumes nominal data
  [Shrout, P. E. and Fleiss, J. L. (1979). "Intraclass correlations: uses in assessing rater reliability." Psychological Bulletin, Vol. 86, No. 2, pp. 420-428.]
- Krippendorff's alpha: "generalizes several specialized agreement coefficients by accepting any number of observers, being applicable to nominal, ordinal, interval, and ratio levels of measurement, being able to handle missing data, and being corrected for small sample sizes" (https://en.wikipedia.org/wiki/Inter-rater_reliability)
  [Krippendorff, K. (2013). Content Analysis: An Introduction to Its Methodology, 3rd Edition. Thousand Oaks, CA: Sage. pp. 221-250.]
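
To make the chance correction behind these coefficients concrete, here is a minimal sketch of Cohen's kappa for two coders: kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of items the coders labelled identically and p_e is the agreement expected by chance from each coder's code frequencies. The coded segments below are invented; in practice you would more likely rely on an existing implementation (for example sklearn.metrics.cohen_kappa_score, or the krippendorff Python package for Krippendorff's alpha).

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders assigning one nominal code per item."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)

    # Observed agreement: proportion of items given the same code by both coders.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n

    # Chance agreement from each coder's marginal code frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a.keys() & freq_b.keys())

    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by two coders to the same ten video segments.
coder_1 = ["share", "identify", "share", "off_task", "share",
           "identify", "share", "share", "off_task", "identify"]
coder_2 = ["share", "identify", "share", "share", "share",
           "identify", "identify", "share", "off_task", "identify"]

print(round(cohens_kappa(coder_1, coder_2), 2))  # 0.67 (raw agreement 0.8, chance agreement 0.39)
```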

Recap: Closed Coding
- Useful when you need to time or count events that can only be identified by a human observer
- Confirmatory research questions
- No richness, unlike open coding
Advice:
- Define and test your coding scheme in advance
- Know your research question and code only what you need
- Get help from a tool