Introduction to Applied Research in Economics
Kamiljon T. Akramov, Ph.D., IFPRI, Washington, DC, USA
Training Course on Applied Econometric Analysis
June 1, 2015, WIUT, Tashkent, Uzbekistan

Why do we need applied economic research?

Introduction

- Excitement about applied economic research comes from the opportunity to learn about cause and effect in the real world.
- Economic theory suggests important relationships, often with policy implications, but virtually never provides quantitative magnitudes of causal effects.
- Applied economics is more vulnerable than the physical sciences to models whose validity will never be clear, because its need for approximation is much stronger.
- Nevertheless, economics has an important quantitative side, which cannot be escaped.
- The important questions of the day are our questions.

Introduction (cont.)

- Will loose monetary policy spark economic growth or just fan the fires of inflation?
- What is the effect of a 1 percentage point increase in broad money on inflation?
- What is the effect of a 1 percentage point increase in interest rates on output growth?
- Will mandatory health insurance really make people healthier?
- How does an additional year of education change earnings?
- What is the effect of farm size on agricultural productivity?
- How does agricultural diversity impact nutritional outcomes?

Introduction (cont.)

- Economists' use of data and tools to answer cause-and-effect questions constitutes the field of applied econometrics.
- The tools of applied econometrics are disciplined data analysis combined with statistical inference.
- We are after truth, but truth is not revealed in full; the messages the data transmit require interpretation.
- For example, comparisons made under ceteris paribus conditions may have a causal interpretation, but ceteris paribus comparisons are difficult to engineer.

Objectives of this course

- The main objective of the course is to strengthen the capacity of young researchers to conduct empirical research.
- The emphasis of the course will be on empirical applications, with a focus on detection of causality in data.
- Ideally, we would like to have an experiment, but most of the time we only have observational (non-experimental) data.

In this course you will learn

- The Furious Five of modern applied econometric research: regression, randomized controlled trials (RCTs), regression discontinuity, instrumental variables, and difference-in-differences.
- Other methods for estimating causal effects using observational data: fixed effects, propensity score matching, etc.
- Productive use of these techniques requires a solid conceptual foundation and a good understanding of the machinery of statistical inference.
- The focus is on applications; theory is used only as needed to understand the "why" of methods.
- To evaluate the empirical (regression) analysis of others; this means you will be able to read and understand empirical economics papers.
- To get some hands-on experience with econometric analysis using real-world data; a first taste is sketched below.
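As a first taste of that hands-on work, here is a minimal regression sketch in Python with statsmodels. All data are simulated and the variable names (`schooling`, `log_earnings`) are illustrative, not course material; the data-generating process has no confounding, so OLS should recover the true slope of 0.10.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1_000
schooling = rng.integers(8, 18, size=n)            # years of education
log_earnings = 1.5 + 0.10 * schooling + rng.normal(0, 0.5, size=n)

X = sm.add_constant(schooling)                     # intercept + regressor
fit = sm.OLS(log_earnings, X).fit()
print(fit.params)                                  # slope should be near 0.10
```

Later sketches in these slides reuse this regression pattern with different right-hand-side variables.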

Hypotheses in empirical research

- Construction of research hypotheses is an important step in applied economics research.
- Hypotheses argue that one phenomenon or behavior causes, or is associated with, another phenomenon or behavior. These phenomena are called constructs.
- Various sources of support are routinely used to develop hypotheses: theory and logical analysis (intuition); past studies (authority and consensus); and real-life experiences and observations.

Framework for assessing empirical research

- Construct validity: to what extent are the constructs of interest successfully operationalized in the research?
- Internal validity: to what extent does the research design permit us to reach causal conclusions about the effect of the independent variable on the dependent variable?
- External validity: to what extent can we generalize from our sample and setting to the populations and settings specified in the research hypothesis?

Maximizing construct validity

- Constructs are abstractions, and any one construct can be measured in different ways because there are a variety of concrete representations of any abstract idea.
- Variables are partial representations of constructs; we work with them because they are measurable.
- Operational definitions specify how to measure a variable.
- Reliability of a measure is defined as the extent to which it is free from random error components.
- Validity of a measure is defined as the extent to which it is free from systematic errors.

Three components of a variable (Stock and Watson, 2010). For example, an achievement test score (the variable) reflects:

- the construct of interest: achievement;
- constructs of disinterest: motivation, ability, anxiety, etc.;
- random errors: grading mistakes, recording mistakes.
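A toy simulation of this decomposition, with every component invented for illustration: the observed test score mixes the construct of interest (achievement) with a construct of disinterest (motivation) and random grading error, so the measure correlates imperfectly with the construct it targets.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10_000
achievement = rng.normal(size=n)                 # construct of interest
motivation = rng.normal(size=n)                  # construct of disinterest
grading_error = rng.normal(scale=0.5, size=n)    # random error component

# The observed variable is a mix of all three components.
test_score = achievement + 0.4 * motivation + grading_error

# Correlation between measure and construct is well below 1
# (about 0.84 with these assumed weights).
print(np.corrcoef(test_score, achievement)[0, 1])
```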

Maximizing internal validity

Threats to the internal validity of regression studies:

- Omitted variable bias
- Misspecification (incorrect functional form)
- Measurement error
- Sample selection bias
- Simultaneous causality bias
- Unobserved heterogeneity

All of these imply that the conditional mean independence assumption fails, in which case OLS is biased and inconsistent. The first threat is simulated in the sketch below.
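A minimal simulation of omitted variable bias, under assumed parameter values (the confounder `ability` is hypothetical): because ability raises both schooling and earnings, the "short" regression that omits it overstates the return to schooling.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
ability = rng.normal(size=n)                     # unobserved confounder
schooling = 12 + 2.0 * ability + rng.normal(size=n)
log_earnings = 1.5 + 0.10 * schooling + 0.30 * ability + rng.normal(size=n)

# "Short" regression omits ability; "long" regression controls for it.
short_reg = sm.OLS(log_earnings, sm.add_constant(schooling)).fit()
long_reg = sm.OLS(log_earnings,
                  sm.add_constant(np.column_stack([schooling, ability]))).fit()
print(short_reg.params[1])   # biased upward (about 0.22 with these values)
print(long_reg.params[1])    # close to the true effect of 0.10
```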

Maximizing external validity

- A population is the aggregate of all the cases that conform to some designated set of specifications: all the people residing in a given country, all households residing in a given state, etc.
- A stratum may be defined by one or more specifications that divide the population into mutually exclusive segments.
- Nonprobability versus probability sampling.
- Probability sampling: simple random sampling gives each element in the population an equal chance of being selected; stratified random sampling samples within each stratum separately. Both are sketched below.
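A small sketch of both probability sampling schemes in pandas, using a made-up sampling frame in which `region` plays the role of the stratum variable.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
frame = pd.DataFrame({
    "region": rng.choice(["north", "south", "east", "west"], size=10_000),
    "income": rng.lognormal(mean=8.0, sigma=0.5, size=10_000),
})

# Simple random sampling: every element has the same selection probability.
srs = frame.sample(n=500, random_state=1)

# Stratified random sampling: draw 5% within each region (stratum).
stratified = frame.groupby("region", group_keys=False).apply(
    lambda g: g.sample(frac=0.05, random_state=1))
print(len(srs), len(stratified))
```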

Better research design helps to mitigate threats to validity

- Edward Leamer's (1983) reflection on the state of empirical work in economics: let's "take the con out of econometrics"; hardly anyone takes data analysis seriously.
- Empirical economics has since experienced a credibility revolution, with a consequent increase in policy relevance and scientific impact.
- Randomized controlled trials (RCTs): treatment and comparison groups.
- RCT designs: randomized two-group design; before-after two-group design; Solomon four-group design. The analysis of the randomized two-group design is sketched after this list.
- Strengths and weaknesses of RCTs: experimental artifacts; external validity issues.
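A minimal sketch of analyzing the randomized two-group design, with simulated data and an assumed true effect of 2.0: because assignment is random, the coefficient on the treatment dummy estimates the average treatment effect.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 2_000
df = pd.DataFrame({"treated": rng.integers(0, 2, size=n)})  # coin-flip assignment
df["outcome"] = 10 + 2.0 * df["treated"] + rng.normal(0, 3, size=n)

# Regression of outcome on the treatment dummy = difference in group means.
fit = smf.ols("outcome ~ treated", data=df).fit()
print(fit.params["treated"])   # close to the true effect of 2.0
```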

Quasi-experimental designs

- Difference-in-differences methods: Card and Krueger (2000) study the effects of minimum wages on employment. A generic difference-in-differences sketch follows this list.
- Instrumental variables: relevance and exogeneity.
- Regression discontinuity: exploits precise knowledge of the rules determining treatment. Angrist and Lavy (1999) study the effects of class size on achievement.
- Matching methods.
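A generic difference-in-differences sketch with simulated data (not a replication of Card and Krueger): regress the outcome on group, period, and their interaction; the interaction coefficient is the difference-in-differences estimate.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 4_000
df = pd.DataFrame({
    "treated_group": rng.integers(0, 2, size=n),  # e.g., treated vs. comparison state
    "post": rng.integers(0, 2, size=n),           # before vs. after the policy
})
# Parallel trends hold by construction; the assumed true effect is 1.5.
df["y"] = (5 + 1.0 * df["treated_group"] + 0.5 * df["post"]
           + 1.5 * df["treated_group"] * df["post"] + rng.normal(size=n))

did = smf.ols("y ~ treated_group * post", data=df).fit()
print(did.params["treated_group:post"])   # DiD estimate, close to 1.5
```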

Data: sources and types

- Experimental versus observational data.
- Cross-sectional data: data on different entities for a single period of time.
- Time-series data: data on a single entity collected at multiple time periods.
- Panel (longitudinal) data: data for multiple entities in which each entity is observed at two or more time periods.

The sketch below shows what each type looks like as a pandas object.
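Toy pandas versions of the three data types; the country codes and growth figures are invented for illustration.

```python
import pandas as pd

# Cross-sectional data: different entities, one period.
cross_section = pd.DataFrame({"gdp_growth": [2.1, 0.8, 3.4]},
                             index=["UZB", "KAZ", "KGZ"])

# Time-series data: one entity, several periods.
time_series = pd.Series([2.1, 2.4, 1.9], index=[2013, 2014, 2015],
                        name="gdp_growth_UZB")

# Panel data: several entities, each observed in several periods.
panel = pd.DataFrame(
    {"gdp_growth": [2.1, 2.4, 0.8, 1.1]},
    index=pd.MultiIndex.from_product([["UZB", "KAZ"], [2013, 2014]],
                                     names=["country", "year"]))
print(panel)
```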

Questions?

- What is the policy question?
- What is the causal relationship of interest?
- What is the dependent variable and how is it measured?
- What is (are) the key independent variable(s)?
- What is the data source?
- What is the identification strategy?
- What is the mode of statistical inference?
- What are the main findings?

Future readings

- Chetty, R. 2013. "Yes, Economics Is a Science." New York Times, October 20, 2013. http://www.nytimes.com/2013/10/21/opinion/yes-economics-is-a-science.html?_r=0
- Angrist, J. D., and J. S. Pischke. 2010. "The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics." Journal of Economic Perspectives 24(2): 3-30.
- Angrist, J. D., and J. S. Pischke. 2009. Mostly Harmless Econometrics: An Empiricist's Companion. Princeton University Press. Chapter 1.
- Angrist, J. D., and J. S. Pischke. 2015. Mastering 'Metrics: The Path from Cause to Effect. Princeton University Press. Introduction.

Thank you and good luck