Doctoral Dissertation Boot Camp: Quantitative Methods
Kamiar Kouzekanani, PhD
January 27, 2018

The Scientific Method of Problem Solving
- The conceptual phase - Reviewing the literature, stating the problem, identifying a theoretical framework, and formulating research questions and/or hypotheses.
- The methods phase - Design, subject selection, instrumentation, data collection methods, and data analysis plan.
- The empirical/analytic phase - Analysis and interpretation of the data to answer the research questions and/or test the hypotheses.
- The dissemination phase - Publishing and/or presenting the study.

According to Kerlinger, scientific research is a systematic, controlled, empirical, and critical investigation of hypothetical propositions about the presumed relations among natural phenomena. It is systematic so that the study may be replicated: 1) direct replication to confirm, 2) systematic replication to generalize. Extraneous/confounding variables must be controlled, the outcome variable must be measured, and the study must be evaluated by experts.
https://home.ubalt.edu/tmitch/632/kerlinder%20definitions.htm

Typical sources of the research problem are experience, literature, theory, external sources, and social issues. Applied research is done to solve a problem and/or improve practice. Basic research is done for the sake of knowledge, to understand the processes and structures that underlie observed behaviors.

Research Quality: Schoenfeld's Framework: Generalizability, Trustworthiness, Importance
https://gse.berkeley.edu/sites/default/files/users/alan-h.-schoenfeld/schoenfeld_2008%202nd%20edn%20intl%20hbk.pdf

MaxMinCon Principle (Kerlinger)
- Max - Maximizing explained/true variance.
- Min - Minimizing unexplained/error variance.
- Con - Controlling for extraneous/confounding variables.

Statement of the Problem
Specific Objectives
Research Questions and/or Hypotheses: Description, Explanation or Prediction, Causation
Research Design

Experimental versus Non-Experimental Studies
Causal inferences may be drawn in experimental research but not in non-experimental studies. Manipulation, control, and observation are the three major ingredients of experimental research. A true experiment employs random assignment; a quasi-experiment employs intact groups.

Do not confuse random assignment with random selection. Random assignment is done in experimental research to strengthen internal validity, so that we may conclude that, for example, the independent variable (e.g., teaching method) did indeed affect the outcome measure (e.g., achievement in mathematics). Random selection is related to external validity (generalizability of the results). A brief sketch contrasting the two appears below.

Experimental versus Causal-Comparative Studies
Also called ex post facto (after the fact), causal-comparative studies are conducted to predict or explain. Causal inferences cannot be drawn because the independent variable is not manipulated by the researcher. A comparison group is selected from a population that is similar to the "characteristic present" group except for the variable(s)/characteristic(s) being investigated; for example, comparing school dropouts with school completers if the purpose of the study is to predict school dropout.
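
To make the distinction concrete, here is a minimal Python sketch (not from the handout; the sampling frame, seed, and group labels are illustrative) that first randomly selects a sample from a hypothetical population and then randomly assigns the selected subjects to two conditions.

```python
# Minimal sketch (assumed, not from the handout): random selection vs. random assignment.
import random

random.seed(2018)

population = [f"student_{i:03d}" for i in range(1, 501)]  # hypothetical sampling frame

# Random SELECTION: draw a sample from the population (bears on external validity).
sample = random.sample(population, k=40)

# Random ASSIGNMENT: split the selected sample into treatment conditions
# (bears on internal validity in a true experiment).
random.shuffle(sample)
treatment_group = sample[:20]   # e.g., new teaching method
control_group = sample[20:]     # e.g., traditional instruction

print(len(treatment_group), len(control_group))
```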

Correlational Research
To examine the degree of the relationship among two or more quantitative variables for the purpose of prediction or explanation. Note that correlation does not necessarily indicate causation.
- Simple correlation, r_xy: one independent variable (e.g., test anxiety) and one dependent variable (e.g., test performance).
- Multiple correlation, R_x1x2x3.y: more than one independent variable (e.g., test anxiety, test preparation, and IQ) and one dependent variable (e.g., test performance).
A computational sketch of both appears at the end of this section.

Survey Research
Done to explore; to obtain information concerning the current status of phenomena. There are no independent or dependent variables, and hypotheses may not be tested.

Census - Studying all members of a population.
Longitudinal (prospective) research - Studying the same sample of subjects over an extended period of time.
Retrospective research - Relies on recollection of past events; it may be called cross-sectional research.

Subject Selection
Key terms: sampling frame, target population, accessible population, sample. Population validity is achieved if it is shown that the sample is representative of the population.

Non-Random/Non-Probability Samples
These samples are not representative of the population; therefore, results may not be generalized from the sample to the population. Examples of such samples:
- Convenience sampling - Selecting cases based on their availability for the study.
- Quota sampling - Selecting cases based on required, exact numbers, or quotas, of persons of varying characteristics.
- Purposive/judgmental sampling - Selecting cases believed to be representative of a given population.
- Network/snowball sampling - Recruiting a few subjects and then asking them to refer friends and acquaintances.
- Self-selected sampling - Selecting subjects via an invitation to which some may respond and agree to participate in the study.
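
The following is a minimal sketch (assumed, not part of the handout) of how simple and multiple correlation could be computed on made-up data; the variable names echo the test anxiety/test performance example above.

```python
# Minimal sketch (assumed): simple correlation r_xy and multiple correlation R on fabricated data.
import numpy as np

rng = np.random.default_rng(0)
n = 100
anxiety = rng.normal(50, 10, n)          # predictor 1
preparation = rng.normal(20, 5, n)       # predictor 2
iq = rng.normal(100, 15, n)              # predictor 3
performance = 80 - 0.4 * anxiety + 0.9 * preparation + 0.1 * iq + rng.normal(0, 5, n)

# Simple correlation, r_xy
r_xy = np.corrcoef(anxiety, performance)[0, 1]

# Multiple correlation, R: correlation between y and its OLS prediction
# from all three predictors (R**2 is the proportion of explained variance).
X = np.column_stack([np.ones(n), anxiety, preparation, iq])
beta, *_ = np.linalg.lstsq(X, performance, rcond=None)
y_hat = X @ beta
R = np.corrcoef(performance, y_hat)[0, 1]

print(f"r_xy = {r_xy:.3f}, multiple R = {R:.3f}, R^2 = {R**2:.3f}")
```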

Random/Probability Samples
These samples are representative of the population; therefore, results may be generalized from the sample to the population. Examples of such samples:
- Simple random sampling - Each member of the study population has an equal chance/probability of being selected.
- Systematic random sampling - Each member of the study population is listed, a random start is designated, and then every kth (k = N/n) case from the list is selected. For example, if N = 2,000 and we would like to select a sample of 200, then k = 2,000/200 = 10. A random number between 1 and k is selected as the starting point of the sampling, say 7. From there on, every kth element is chosen until the desired sample size is reached, that is, 7, 17, 27, etc.
- Stratified random sampling - Each member of the study population is assigned to a group, or stratum, and then a simple random sample is selected from each stratum.
  Proportionate stratified sampling - The population, N = 20,000, is divided into the following strata:
    stratum 1 = 6,000; stratum 2 = 4,000; stratum 3 = 4,500; stratum 4 = 5,500
  To obtain a sample, n, of 1,000, the sampling fraction is n/N = 1,000/20,000 = 0.05:
    6,000 x 0.05 = 300 from stratum 1; 4,000 x 0.05 = 200 from stratum 2;
    4,500 x 0.05 = 225 from stratum 3; 5,500 x 0.05 = 275 from stratum 4
  Disproportionate stratified sampling - The strata are not sampled in proportion to their sizes.
- Single-stage cluster sampling - Intact groups, not individuals, are randomly selected. For example, suppose a sample of 1,000 nurses is sought to examine nurses' job satisfaction in a city. If, on average, there are 100 nurses in the city's hospitals, then 10 hospitals can be randomly selected, and within each of these hospitals, all nurses would be included in the study.
- Multi-stage cluster sampling - Involves selecting clusters at random (e.g., from states to universities to faculty members).
- Multi-stage random sampling - For example, select the clusters at random first, followed by random selection of the subjects within the clusters.
The systematic and proportionate stratified examples are worked through in the sketch below.
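
Here is a minimal sketch (assumed, not part of the handout) that reproduces the arithmetic of the systematic and proportionate stratified examples above.

```python
# Minimal sketch (assumed): systematic and proportionate stratified sampling arithmetic.
import random

random.seed(7)

# Systematic random sampling: N = 2,000, n = 200, so k = N/n = 10.
N, n = 2000, 200
k = N // n
start = random.randint(1, k)                        # random starting point between 1 and k
systematic_sample = list(range(start, N + 1, k))    # e.g., 7, 17, 27, ...
print(k, start, systematic_sample[:3], len(systematic_sample))

# Proportionate stratified sampling: sampling fraction n/N = 1,000/20,000 = 0.05.
strata = {"stratum 1": 6000, "stratum 2": 4000, "stratum 3": 4500, "stratum 4": 5500}
fraction = 1000 / sum(strata.values())
allocation = {name: round(size * fraction) for name, size in strata.items()}
print(allocation)   # {'stratum 1': 300, 'stratum 2': 200, 'stratum 3': 225, 'stratum 4': 275}
```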

Instrumentation
Instrument is the generic term for any type of measurement.

Validity - The extent to which an instrument measures what one thinks it is measuring. Types of validity:
- Face/Logical Validity - What does the test appear, superficially, to measure?
- Content Validity - How adequately does the test content sample the larger domain of situations it represents?
- Criterion-Related Validity - Refers to the relationship between the scores on a measuring instrument and an independent external variable (criterion) believed to measure directly the behavior or characteristic in question. Obtaining a satisfactory criterion for success is a major problem. For example, what criterion can be used to validate a measure of teacher effectiveness? Who is to judge teacher effectiveness? What criterion can be used to test the predictive validity of a musical aptitude test? The validity coefficient is the correlation between the test score and the criterion measure (see the sketch at the end of this section).
  Concurrent Validity - To estimate present standing; appropriate for tests developed for diagnosis ("Johnny is neurotic").
  Predictive Validity - To predict future performance; appropriate for tests developed for selection and/or classification ("Johnny is likely to become neurotic").
- Construct Validity - The extent to which the test may be said to measure a theoretical construct or trait.
  Convergent Validity - Convergence among different methods designed to measure the same construct, for example, a high correlation between two measures of anxiety; or showing that the test correlates highly with other variables with which it should theoretically correlate, for example, the correlation between scholastic aptitude test scores and school grades, or between scholastic aptitude test scores and achievement test scores.
  Discriminant Validity - Refers to the distinctiveness of constructs, that is, showing that the test does not correlate highly with variables from which it should differ; for example, a low correlation between scholastic aptitude test scores and musical aptitude test scores, or between measures of anxiety and introversion.
  Factor Analysis
  Experimental Intervention
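
As a concrete illustration (a minimal sketch, assumed and not from the handout), a validity coefficient is simply the correlation between test scores and a criterion, while a discriminant check is the correlation with a measure from which the test should differ; all data below are fabricated.

```python
# Minimal sketch (assumed): validity coefficient and a discriminant-validity check.
import numpy as np

rng = np.random.default_rng(1)
n = 60
aptitude_test = rng.normal(500, 100, n)                       # hypothetical predictor test
school_grades = 0.6 * aptitude_test + rng.normal(0, 80, n)    # criterion measure
music_aptitude = rng.normal(50, 10, n)                        # theoretically unrelated test

validity_coefficient = np.corrcoef(aptitude_test, school_grades)[0, 1]   # convergent evidence
discriminant_r = np.corrcoef(aptitude_test, music_aptitude)[0, 1]        # should be near zero

print(f"validity coefficient = {validity_coefficient:.2f}, discriminant r = {discriminant_r:.2f}")
```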

Reliability - The degree of consistency of a measuring instrument in measuring whatever it is designed to measure.

A. Errors of Measurement
1. Systematic Errors - They lean in one direction; scores tend to be all positive or all negative, or all high or all low. Error in this case is constant, or biased.
2. Random Errors - They are self-compensating; scores tend to lean now this way, then the other way. Error in this case is random. There are several sources of such error (e.g., chance, fatigue, mood, and the like).

Procedures for Estimating Reliability
A. Test-Retest Reliability - Coefficient of Stability.
B. Equivalent-Forms (Parallel-Forms) Reliability
   1. Coefficient of Equivalence
   2. Coefficient of Equivalence & Stability
C. Measures of Internal Consistency
   1. Split-Half Reliability - Reliability is a function of the length of the instrument.
   2. Cronbach's Coefficient Alpha - It ranges from 0 to 1.0 (0% to 100%); a computational sketch appears below.

Test Session 1, Test Form 1: Internal Consistency Reliability
Test Session 1, Test Form 2: Coefficient of Equivalence (immediate administration of test forms)
Test Session 2, Test Form 1: Coefficient of Stability
Test Session 2, Test Form 2: Coefficient of Equivalence & Stability (delayed administration of test forms)
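
The following is a minimal sketch (assumed, not part of the handout) of Cronbach's coefficient alpha using its standard formula; the response matrix is made up for illustration.

```python
# Minimal sketch (assumed): Cronbach's coefficient alpha for a k-item scale,
# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Illustrative data: 5 respondents answering a 4-item Likert-type scale.
scores = np.array([
    [4, 4, 3, 4],
    [3, 3, 3, 3],
    [2, 2, 1, 2],
    [4, 3, 4, 4],
    [1, 2, 2, 1],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```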

Data Collection
- Existing data/information
- Mail survey/online survey
- Telephone survey
- Group-administered survey
- Personal interview
- Group interview/focus group
- Observation
- Mass media/public hearings
- Bio-physiologic measures

Likert Scale
It is the most widely used scale in survey research. Levels of agreement, for example:
4 = Strongly Agree, 3 = Agree, 2 = Disagree, 1 = Strongly Disagree
(A small coding sketch appears below.)

Commonly Used Tests
- Intelligence test - To predict achievement in general.
- Aptitude test - To estimate future performance.
- Achievement test - To assess status upon the completion of the task.
- Diagnostic test - To identify a student's strengths and weaknesses in a subject matter.
- Personality tests - To assess personality traits.
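
As a small illustration (a minimal sketch, assumed and not from the handout), Likert responses are typically mapped to the numeric codes shown above before analysis; the item responses here are made up.

```python
# Minimal sketch (assumed): coding Likert responses and computing a scale score.
likert_codes = {"Strongly Agree": 4, "Agree": 3, "Disagree": 2, "Strongly Disagree": 1}

responses = ["Agree", "Strongly Agree", "Disagree", "Agree"]   # one respondent, four items
item_scores = [likert_codes[r] for r in responses]

total = sum(item_scores)
mean = total / len(item_scores)
print(item_scores, total, round(mean, 2))
```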

Data Analysis

I. Continuous (measurement) data - Use parametric statistical techniques:
A. Differences
   1. Two independent groups: t test for independent samples.
   2. Two dependent groups: t test for correlated samples.
   3. One grouping variable with multiple independent levels and only one outcome measure: one-way ANOVA. Use ANCOVA if there are also extraneous (concomitant) variables.
   4. More than one grouping variable with multiple independent levels and only one outcome measure: factorial ANOVA.
   5. Multiple dependent means: repeated-measures ANOVA.
   6. One grouping variable with multiple independent levels and more than one outcome measure: MANOVA, discriminant analysis.
B. Relationships
   1. Two continuous variables: Pearson r.
   2. Several predictors, one criterion: multiple correlation and multiple regression. Use logistic regression if the criterion is a dichotomous (binary) variable. If the criterion variable has more than two levels, use discriminant analysis.
   3. Several independent variables (predictors) and several dependent variables (criteria): canonical correlation.

II. Categorical (frequency) data - Use non-parametric statistical techniques:
A. Differences
   1. Two independent groups: Ordinal data: the Mann-Whitney-Wilcoxon U test. Nominal/count data: chi-square goodness-of-fit test.
   2. Two dependent groups: Ordinal data: the Wilcoxon Matched-Pairs Signed-Ranks test. Use the McNemar test when analyzing dichotomous nominal data.
   3. One grouping variable with multiple independent levels: Ordinal data: the Kruskal-Wallis test. Nominal/count data: chi-square goodness-of-fit test.
   4. Multiple dependent groups: the Friedman test.
B. Relationships
   1. Two ordinal variables: Spearman rho or Kendall tau. Employ the Kappa coefficient if you wish to examine inter-judge agreement (reliability of ratings).
   2. Two binary variables: chi-square test of independence, followed by the Phi coefficient. If there are cells with expected frequency < 5, use Fisher's Exact Test (note that this test does not use the chi-square approximation and should not be called a chi-square test).
   3. Row-by-column contingency table: chi-square test of independence, followed by Cramer's statistic.
   4. Several related samples: Kendall Coefficient of Concordance.
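
Two of the tests from the guide above are illustrated in this minimal sketch (assumed, not part of the handout) using SciPy and fabricated data: an independent-samples t test and a chi-square test of independence followed by the phi coefficient.

```python
# Minimal sketch (assumed): independent-samples t test and chi-square test of independence.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Continuous outcome, two independent groups -> t test for independent samples.
group_a = rng.normal(75, 8, 30)
group_b = rng.normal(70, 8, 30)
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# Two binary variables -> chi-square test of independence, then phi.
table = np.array([[30, 10],
                  [15, 25]])            # 2 x 2 contingency table of counts
chi2, chi_p, dof, expected = stats.chi2_contingency(table, correction=False)
phi = np.sqrt(chi2 / table.sum())

print(f"t = {t_stat:.2f} (p = {t_p:.3f}); chi2 = {chi2:.2f} (p = {chi_p:.3f}), phi = {phi:.2f}")
```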

Human Subject Research, CITI Online Course: http://research.tamucc.edu/compliance/citi.html
IRB: http://research.tamucc.edu/compliance/forms.html#irb

References
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: LEA.
Field, A. (2013). Discovering statistics using SPSS. Thousand Oaks, CA: Sage.
Gall, M.D., Gall, J.P., & Borg, W.R. (2015). Applying educational research (7th ed.). Boston, MA: Pearson.
Patten, M.L., & Newhart, M. (2018). Understanding research methods (10th ed.). New York, NY: Routledge.
Pedhazur, E.J., & Schmelkin, L.P. (1991). Measurement, design, and analysis. Hillsdale, NJ: LEA.
Salkind, N.J. (2012). 100 questions (and answers) about research methods. Los Angeles, CA: Sage.
Salkind, N.J. (2015). 100 questions (and answers) about statistics. Los Angeles, CA: Sage.
Stevens, J.P. (2007). Intermediate statistics. Mahwah, NJ: LEA.
Stevens, J.P. (2009). Applied multivariate statistics for the social sciences. New York, NY: Routledge.
Vogt, W.P., Gardner, D.C., & Haeffele, L.M. (2012). When to use what research design. New York, NY: Guilford.