Empirical Methods

Research Landscape. Qualitative = Constructivist approach (build theory from data). Quantitative = Positivist/post-positivist approach. Mixed methods = Pragmatist approach.

Experimental Analysis of Mode Switching Techniques in Pen-based User Interfaces. Given a limited input device, you need to overload it via modes. The paper studied five different options: non-preferred-hand button, barrel button, pressure, press-and-hold, and the eraser end.

Task

Task (2). Each participant did 1800 pie-cutting tasks for $25: 5 techniques × 9 blocks × 8 orientations × 5 cuts per screen. 5-factor repeated-measures design, with everything else designed to eliminate confounds. Counterbalanced using a 5×5 Latin square.

Latin Square Design
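
As a concrete illustration of the counterbalancing above, here is a minimal Python sketch (not from the paper) that builds a cyclic 5×5 Latin square over the five mode-switching techniques and assigns participants to its rows; the technique labels, participant IDs, and helper names are assumptions for illustration only.

```python
# Minimal sketch: cyclic 5x5 Latin square over the five techniques, plus a
# round-robin assignment of participants to row orders. All labels are assumed.

TECHNIQUES = ["non-preferred hand", "barrel button", "pressure", "hold", "eraser"]

def latin_square(conditions):
    """Cyclic Latin square: each condition appears once in every row and column."""
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)] for row in range(n)]

def assign_orders(participants, square):
    """Deal participants through the rows so every order is used equally often."""
    return {p: square[i % len(square)] for i, p in enumerate(participants)}

if __name__ == "__main__":
    square = latin_square(TECHNIQUES)
    for row in square:
        print(row)
    orders = assign_orders([f"P{i}" for i in range(1, 11)], square)
    print(orders["P6"])  # P6 repeats the row-0 order, since there are only 5 rows
```

A cyclic square equalizes how often each technique appears in each ordinal position, but it does not balance first-order carryover between adjacent techniques; a Williams-style balanced square is the usual refinement (see the counterbalancing note linked later in the deck).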

Findings (charts): task time and error.

Satisfaction

Synopses from William, Eddie, Val, Agnes, Hala, Yuexing. Please post early! Also: shorter is fine, but include some thoughts of your own.

Comments? Hala: seemed like the paper was primarily about pressure-based mode switching, a follow-on to Ramos's work on pressure widgets. Other comments?

Questions for Project Definition. How might one do a replication study on this paper? Note: this is a trick question; I have done one with Jaime Ruiz and Bill Cowan. It led to 3 published research papers with Jaime and a model of non-preferred-hand mode switching.

Replication Study

Replication Study 2: BiTouch and BiPad

Overview (Wikipedia): any research which bases its findings on observations as a test of reality. Accumulation of evidence results from planned research design. Academic rigor determines legitimacy. It frequently refers to scientific-style experimentation, but many qualitative researchers also use this term.

Positivism: describe only what we can measure/observe; there is no ability to have knowledge beyond that. Example: psychology concentrates only on factors that influence behaviour and does not consider what a person is thinking. The assumption is that things are deterministic.

Post-Positivism: a recognition that the scientific method can only answer questions in a certain way. Often called critical realism: there exists an objective reality, but we are limited in our ability to study it. I am often influenced by my physics background when I talk about this: we live in a closed, limited, 4-D universe of a specific size. OK, so what's outside the boundary of our universe?

Implications of Post-Positivism. The idea that all theory is fallible and subject to revision. The goal of a scientist should be to disprove something they believe. The idea of triangulation: different measures and observations tell you different things, and you need to look across these measures to see what's really going on. The idea that biases can creep into any observation that you make, either on your end or on the subject's end.

Experimental Biases in the Real World: Hawthorne effect/John Henry effect, experimenter effect/observer-expectancy effect, Pygmalion effect, placebo effect, novelty effect.

Hawthorne Effect. Named after the Hawthorne Works factory in Chicago. The original experiment asked whether lighting changes would improve productivity, and found that anything they did improved productivity, even changing the variable back to the original level. When the benefits stopped or the studying stopped, the productivity increase went away. Why? The motivational effect of interest being shown in the workers. Also, the flip side, the John Henry effect: realizing that you are in the control group makes you work harder.

Experimenter Effect. A researcher's bias influences what they see. Example from Wikipedia: music backmasking; once the subliminal lyrics are pointed out, they become obvious. Dowsing: no better than chance. The issue: if you expect to see something, maybe something in that expectation leads you to see it. Mitigated via double-blind studies.

Pygmalion Effect. A self-fulfilling prophecy: if you place greater expectations on people, they tend to perform better. Studies of teachers found that they can double the amount of student progress in a year if they believe students are capable. If you think someone will excel at a task, then they may, because of your expectation.

Placebo Effect. Subject expectancy: if you think the treatment, condition, etc. has some benefit, then it may. Placebo-based anti-depressants, muscle relaxants, etc. In computing, an improved GUI, a better device, etc. Steve Jobs: http://www.youtube.com/watch?v=8jzbljxpbuu Bill Buxton: http://www.youtube.com/watch?v=arrus9cxuia

Novelty Effect. Typically seen with technology: performance improves when new technology is introduced because people have increased interest in it. Examples: computer-assisted instruction in secondary schools, computers in the classroom in general, etc.

What can you test? Three things: comparisons, models, and exploratory analysis. The reading was comparative, with some nod to model validation.

Concepts. Randomization and control within an experiment: random assignment of cases to comparison groups; control of the implementation of a manipulated treatment variable; measurement of the outcome with relevant, reliable instruments. Internal validity: did the experimental treatments make the difference in this case? Threats to validity: history threats (uncontrolled, extraneous events), instrumentation threats (failure to randomize interviewers/raters across comparison groups), and selection threats (when groups are self-selected).
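
As a minimal sketch of the first concept above (random assignment of cases to comparison groups), the following Python snippet deals shuffled participants into two groups; the participant IDs, group labels, and fixed seed are invented for illustration, not taken from any study discussed here.

```python
# Minimal sketch: random assignment of cases to comparison groups.
# Participant IDs, group labels, and the seed are assumptions for illustration.
import random

def randomly_assign(cases, groups=("treatment", "control"), seed=42):
    """Shuffle the cases, then deal them round-robin into the groups."""
    rng = random.Random(seed)
    shuffled = list(cases)
    rng.shuffle(shuffled)
    return {g: shuffled[i::len(groups)] for i, g in enumerate(groups)}

assignment = randomly_assign([f"P{i}" for i in range(1, 21)])
print(assignment["treatment"])
print(assignment["control"])
```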

Themes. HCI context, Scott MacKenzie's tutorial, observe and measure, research questions, user studies group participation, user studies terminology, user studies step-by-step summary, parts of a research paper.

Observations and Measures. Observations can be manual (a human observer using log sheets, notebooks, questionnaires, etc.) or automatic (sensors, software, etc.). Measurements (numerical): Nominal: arbitrary assignment of value (1 = male, 2 = female). Ordinal: rank (e.g. 1st, 2nd, 3rd, etc.). Interval: equal distance between values, but no absolute zero. Ratio: absolute zero, so ratios are meaningful (e.g. 40 wpm is twice as fast as 20 wpm typing). Given measurements and observations, we describe, compare, infer, relate, and predict.
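
To make the four measurement scales concrete, here is a small sketch with invented values showing which summary statistics are meaningful at each level; none of these numbers come from the reading.

```python
# Illustrative sketch of the four measurement scales; all values are invented.
from statistics import mode, median, mean

nominal = [1, 2, 1, 1, 2]        # arbitrary codes (1 = male, 2 = female): only counts and the mode make sense
ordinal = [1, 3, 2, 1, 2]        # ranks: order matters, so the median is meaningful, but differences are not
interval = [18.5, 21.0, 23.5]    # e.g. degrees Celsius: differences are meaningful, ratios are not (no absolute zero)
ratio = [40.0, 20.0, 30.0]       # e.g. words per minute: absolute zero, so ratios are meaningful

print(mode(nominal))             # most frequent category
print(median(ordinal))           # middle rank
print(mean(interval))            # averaging interval data is fine
print(ratio[0] / ratio[1])       # 2.0: 40 wpm is twice as fast as 20 wpm
```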

Research Questions. You have something to test (a new technique). Untestable questions: Is the technique any good? What are the technique's strengths and weaknesses? What are its performance limits? How much practice is needed to learn it? Testable questions seem narrower; see the example at right (Scott MacKenzie's course notes).

Research Questions (2). Internal validity: differences (in means) should be a result of the experimental factors (e.g. what we are testing); variances in means result from differences in participants; other variances are controlled or exist randomly. External validity: the extent to which results can be generalized to a broader context; participants in your study are representative; test conditions can be generalized to the real world. These two can work against each other. Problems with Usable.

Example: Validity trade-off

Research Questions (3). Given a testable question (e.g. a new technique is faster) and an experimental design with appropriate internal and external validity, you collect data (measurements and observations). Questions: Is there a difference? Is the difference large or small? Is the difference statistically significant? Does the difference matter?
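
The four questions map onto a few lines of analysis; here is a hedged sketch using NumPy/SciPy on invented paired task-time data (the numbers and variable names are placeholders, not results from the reading).

```python
# Sketch: is there a difference, how large is it, and is it statistically significant?
# The task-time data below are invented for illustration only.
import numpy as np
from scipy import stats

technique_a = np.array([1.21, 1.35, 1.10, 1.42, 1.30, 1.25, 1.38, 1.18])  # seconds per trial
technique_b = np.array([1.05, 1.22, 1.01, 1.30, 1.15, 1.12, 1.28, 1.07])

diff = technique_a - technique_b
mean_diff = diff.mean()                          # Is there a difference, and how big?
cohens_d = mean_diff / diff.std(ddof=1)          # Standardized effect size (paired Cohen's d)
t_stat, p_value = stats.ttest_rel(technique_a, technique_b)  # Is it statistically significant?

print(f"mean difference = {mean_diff:.3f} s, d = {cohens_d:.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
# Whether the difference *matters* is a practical judgment the statistics cannot make for you.
```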

Significance Testing R. A. Fisher (1890-1962) Considered designer of modern statistical testing Fisher s writings on Decision Theory versus Statistical Inference: An important difference is that Decisions are final while the state of opinion derived from a test of significance is provisional, and capable, not only of confirmation but also of revision (p.100). A test of significance... is intended to aid the process of learning by observational experience. In what it has to teach each case is unique, though we may judge that our information needs supplementing by further observations of the same, or of a different kind (pp. 100-101). Implications? What is the difference between statistical testing and qualitative research?

Testing. Various tests: t- and z-tests for two groups; ANOVA and variants for multiple groups; regression analysis for modeling. Also: the binomial test for distributions and the chi-square test for tabular values. Great on-line resources: http://www.statisticshell.com/ http://www.statisticshell.com/html/limbo.html
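
Assuming a Python/SciPy analysis environment (the slides do not prescribe one), the tests listed above correspond to the following library calls; all data here are invented placeholders.

```python
# Sketch of the tests named above, using SciPy; all data are invented placeholders.
import numpy as np
from scipy import stats

g1 = np.random.default_rng(0).normal(10, 2, 30)      # group 1 scores
g2 = np.random.default_rng(1).normal(11, 2, 30)      # group 2 scores
g3 = np.random.default_rng(2).normal(12, 2, 30)      # group 3 scores

print(stats.ttest_ind(g1, g2))                       # t-test: two independent groups
print(stats.f_oneway(g1, g2, g3))                    # one-way ANOVA: multiple groups
print(stats.linregress(g1, g2))                      # simple linear regression for modeling
print(stats.binomtest(k=14, n=20, p=0.5))            # binomial test for a distribution of successes
print(stats.chi2_contingency([[10, 20], [25, 15]]))  # chi-square test for tabular (contingency) data
```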

Research Design. Participants (formerly "subjects"): use an appropriate number (e.g. similar to what others have used). Independent variable: what you manipulate, and what levels of the IV were tested (the test conditions). Confounding variables: variables that can cause variation, e.g. practice or prior knowledge.

Research Design (2). Within subjects versus between subjects: within = repeated measures. Sometimes it is a choice: within controls subject variance (easier statistical significance), but can have interference. Counterbalancing: typing on QWERTY versus numeric keyboard; participants could learn the phrases, and some phrases could be easier, so vary the order of devices. Latin square: http://www.yorku.ca/mack/rn-counterbalancing.html
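
A small simulation with invented parameters can show why the within-subjects choice "controls subject variance": analyzing the same paired data with a paired test (correct for within-subjects) versus an unpaired test (as if between-subjects) illustrates how much individual differences inflate the error term when the pairing is ignored.

```python
# Sketch (simulated, invented parameters): why repeated measures control participant variance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 12
baseline = rng.normal(10.0, 3.0, n)        # large individual differences between participants
effect = 0.5                               # small true advantage for condition B
cond_a = baseline + rng.normal(0.0, 0.3, n)
cond_b = baseline - effect + rng.normal(0.0, 0.3, n)

print(stats.ttest_rel(cond_a, cond_b))     # paired: participant variance cancels in the differences
print(stats.ttest_ind(cond_a, cond_b))     # unpaired: participant variance inflates the error term
# With individual differences this large, the paired test is usually far more sensitive,
# which is the "easier statistical significance" noted above; the price is possible
# interference/carryover, which is why condition order is counterbalanced (Latin square).
```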

Reading Experimental Results. Sometimes you need to read carefully to fully appreciate what the data is saying. Consider the Wedge reading.