Macroeconometric Analysis. Chapter 1. Introduction


Chetan Dave and David N. DeJong

1 Background

The seminal contribution of Kydland and Prescott (1982) marked the crest of a sea change in the way macroeconomists conduct empirical research. Under the empirical paradigm that remained predominant at the time, the focus was on purely statistical (or reduced-form) characterizations of macroeconomic behavior. But the powerful criticism of this approach set forth by Lucas (1976), and the methodological contributions of, e.g., Sims (1972) and Hansen and Sargent (1980), sparked a transition to a new empirical paradigm. In this transitional stage, the formal imposition of theoretical discipline on reduced-form characterizations became established. This imposition most typically took the form of cross-equation restrictions, under which the stochastic behavior of a set of exogenous variables, coupled with forward-looking behavior on the part of economic decision makers, yields implications for the endogenous stochastic behavior of variables determined by the decision makers. Nevertheless, the imposition of such restrictions was indirect, and reduced-form specifications continued to serve as the focal point of empirical research.

Kydland and Prescott turned this emphasis on its head. As a legacy of their work, structural representations no longer serve as indirect sources of theoretical discipline to be imposed upon statistical specifications. Instead, structural representations serve directly as the foundation upon which empirical work may be conducted.

The methodologies used to implement structural representations as foundational empirical models have evolved over time and vary considerably. The same is true of the statistical formality with which this work is conducted. But despite the characteristic heterogeneity of methods employed in pursuing contemporary empirical macroeconomic research, the influence of Kydland and

Prescott remains evident today.

This text details the use of structural representations of macroeconomic activity as foundational models upon which empirical work may be conducted. It is intended primarily as an instructional guide for graduate students and practitioners, and so contains a distinct how-to perspective throughout. The methodologies it presents are organized roughly following the chronological evolution of the empirical literature in macroeconomics that has emerged following the work of Kydland and Prescott; thus it also serves as a reference guide. Throughout, the methodologies are demonstrated using applications to three benchmark structural models: a real-business-cycle model (fashioned after King, Plosser and Rebelo, 1988); a monetary model featuring monopolistically competitive firms (fashioned after Ireland, 2004); and an asset-pricing model (fashioned after Lucas, 1978). All three are examples of dynamic stochastic general equilibrium (DSGE) models.

The empirical tools outlined in the text share a common foundation: a system of nonlinear expectational difference equations derived as the solution of a DSGE model. The strategies outlined for implementing these models empirically typically involve the derivation of linear approximations of the systems, and then the establishment of various empirical implications of the systems. The primary focus of the text is on the latter component of these strategies: the text covers a wide range of alternative methodologies that have been used in pursuit of a wide range of empirical applications. Demonstrated applications include: parameter estimation; assessments of fit and model comparison; forecasting; policy analysis; and measurement of unobservable facets of aggregate economic activity (e.g., measurement of productivity shocks).

2 Overview

The text is divided into two parts. Part I presents foundational material included to help keep the text self-contained. Following this introduction, Chapter 2 outlines two preliminary steps often employed in converting a given DSGE model into an empirically implementable system of equations. The first step involves the linear approximation of the model; the second step involves the solution of the resulting linearized system. The solution takes the form of a state-space representation for the observable variables featured in the model.

Chapter 3 presents background material for conducting statistical analyses. First, the Kalman filter is presented as a means for pursuing likelihood-based analyses of state-space representations. Next, alternative approaches for pre-filtering the data to be analyzed are outlined. Pre-filtering is typically necessary to align what is being measured in the data with what is being modelled by the theory. For example, the separation of trend from cycle is necessary in confronting trending data with a model of business-cycle activity. The chapter concludes with an overview of summary statistics useful for characterizing stochastic time-series behavior. Part I concludes in Chapter 4 with an introduction of the benchmark models that serve as examples in Part II of the text.

In Part II, Chapters 5 through 9 are devoted to the following empirical methodologies: calibration; generalized and simulated method of moments; maximum likelihood estimation; Bayesian estimation; and indirect inference. Each chapter begins with a general presentation of the methodology, and then presents applications of the methodology to the three benchmark models in pursuit of alternative empirical objectives. The text concludes in Chapter 10 with an overview and comparison of methodologies.
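The trend/cycle separation mentioned for Chapter 3 can be illustrated with the Hodrick-Prescott filter, one standard pre-filtering device. The following is a minimal Python sketch, not code from the text; the smoothing parameter value is the conventional choice for quarterly data, and the example series is purely illustrative.

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott filter: split a series into trend and cycle.

    Solves min_tau  sum_t (y_t - tau_t)^2 + lam * sum_t (d^2 tau_t)^2,
    whose closed-form solution is tau = (I + lam*K'K)^{-1} y, where K is
    the (T-2) x T second-difference matrix.
    """
    y = np.asarray(y, dtype=float)
    T = len(y)
    # Rows of K implement tau_t - 2*tau_{t+1} + tau_{t+2}.
    K = np.zeros((T - 2, T))
    for i in range(T - 2):
        K[i, i], K[i, i + 1], K[i, i + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(T) + lam * (K.T @ K), y)
    cycle = y - trend
    return trend, cycle

# Example: a linear trend plus small noise; the extracted cycle is
# mean-zero by construction and small relative to the trend.
rng = np.random.default_rng(0)
y = 0.02 * np.arange(100) + 0.01 * rng.standard_normal(100)
trend, cycle = hp_filter(y)
```

In practice the filtered cycle, not the raw series, is what gets confronted with a stationary business-cycle model.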

Chapter 5 presents the most basic empirical methodology covered in the text: the calibration exercise, as pioneered by Kydland and Prescott (1982). Original applications of this exercise sought to determine whether models designed and parameterized to provide an empirically relevant account of long-term growth were also capable of accounting for the nature of the short-term fluctuations that characterize business cycles, summarized using collections of sample statistics measured in the data. More generally, implementation begins with the identification of a set of empirical measurements that serve as constraints on the parameterization of the model under investigation: parameters are chosen to ensure that the model can successfully account for these measurements. (It is often the case that certain parameters must also satisfy additional a priori considerations.) Next, implications of the duly parameterized model for an additional set of statistical measurements are compared with their empirical counterparts to judge whether the model is capable of providing a successful account of these additional features of the data. A challenge associated with this methodology arises in judging success, since this second-stage comparison is made in the absence of a formal statistical foundation.

The generalized and simulated method of moments (GMM and SMM) methodologies presented in Chapter 6 serve as one way to address problems associated with the statistical informality of calibration exercises. Motivation for their implementation arises from the fact that there is statistical uncertainty associated with the set of empirical measurements that serve as constraints in the parameterization stage of a calibration exercise. For example, a sample mean has an associated sample standard error. Thus there is also statistical uncertainty associated with model parameterizations derived from mappings onto empirical measurements (referred to generally as statistical moments). GMM

and SMM methodologies account for this uncertainty formally: the parameterizations they yield are interpretable as estimates, featuring classical statistical characteristics. Moreover, if the number of moments used in obtaining parameter estimates exceeds the number of parameters being estimated (i.e., if the model in question is over-identified), the estimation stage also yields objective goodness-of-fit measures that can be used to judge the model's empirical performance. GMM and SMM methodologies are examples of limited-information estimation procedures: they are based on a subset of the information available in the data (the targeted moments selected in the estimation stage). An attractive feature of these methodologies is that they may be implemented in the absence of explicit assumptions regarding the underlying distributions that govern the stochastic behavior of the variables featured in the model. A drawback is that decisions regarding the moments chosen in the estimation stage are often arbitrary, and results (e.g., regarding fit) can be sensitive to particular choices.

In Chapter 7, the text presents the full-information counterpart to these methodologies: likelihood analysis. Given a distributional assumption regarding sources of stochastic behavior in a given model, the chapter details how the full range of empirical implications of the model may be assessed via conventional likelihood analysis, facilitated by use of the Kalman filter. Parameter estimation and model evaluation are facilitated in a straightforward way using maximum-likelihood techniques. Moreover, given model estimates, the implied behavior of unobservable variables present in the model (e.g., productivity shocks) may be inferred as a by-product of the estimation stage.

A distinct advantage of working directly with structural models is that, unlike with their reduced-form counterparts, one often has clear a priori guidance concerning their parameterization. For example, specifications of subjective annual discount rates that exceed 10% may be dismissed out of hand as implausible. This motivates the subject of Chapter 8, which details the adoption of a Bayesian perspective in bringing full-information procedures to bear in working with structural models. From the Bayesian perspective, a priori views on model parameterization may be incorporated formally into the empirical analysis in the form of a prior distribution. Coupled with the associated likelihood function via Bayes' Rule, the corresponding posterior distribution may be derived; this conveys information regarding the relative likelihood of alternative parameterizations of the model, conditional on the specified prior and the observed data. In turn, conditional statements regarding the empirical performance of the model relative to competing alternatives, the implied behavior of unobservable variables present in the model, and likely future trajectories of model variables may also be derived. A drawback associated with the adoption of a Bayesian perspective in this class of models is that posterior analysis must be accomplished via the use of sophisticated numerical techniques; special attention is devoted to this problem in the chapter.

While it is becoming increasingly common to address specific empirical questions using structural models, reduced-form models remain a workhorse for empirical practitioners. The use of structural specifications to account for and help interpret findings obtained using reduced-form models is the objective of the indirect inference methodology, which is the subject of Chapter 9. The goal of indirect inference is to determine whether data simulated from a given structural model can be used to replicate estimates obtained using actual data in a given reduced-form analysis. If so, then a structural interpretation of the reduced-form phenomenon may be offered.
Note that indirect inference amounts to a moment-matching exercise, in which estimates obtained in the reduced-form analysis serve as target moments.
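This moment-matching logic can be sketched with a toy example in Python. Nothing here is from the text: a simple AR(1) stands in for the structural model, an OLS autoregression serves as the reduced-form exercise, and all numbers are illustrative.

```python
import numpy as np

def simulate(rho, T, rng):
    """Toy structural model standing in for a DSGE model: an AR(1),
    x_t = rho * x_{t-1} + eps_t, with standard-normal shocks."""
    x = np.zeros(T)
    eps = rng.standard_normal(T)
    for t in range(1, T):
        x[t] = rho * x[t - 1] + eps[t]
    return x

def ols_slope(x):
    """The reduced-form exercise: OLS regression of x_t on x_{t-1}."""
    y, z = x[1:], x[:-1]
    return float(z @ y / (z @ z))

# "Actual" data; in a real application this would be observed data.
data = simulate(0.8, 4000, np.random.default_rng(1))
target = ols_slope(data)   # the reduced-form estimate to be replicated

def objective(rho):
    # Distance between the reduced-form estimate on simulated data and the
    # estimate on actual data; a fixed seed keeps the objective smooth in rho.
    sim = simulate(rho, 4000, np.random.default_rng(42))
    return (ols_slope(sim) - target) ** 2

# Grid search keeps the sketch dependency-free; a real application would
# use a proper optimizer and a weighting matrix over several target moments.
grid = np.arange(0.50, 0.96, 0.01)
rho_hat = float(min(grid, key=objective))
```

The structural parameter estimate `rho_hat` is the value whose simulated data best replicates the reduced-form slope estimated on the actual data.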

An attractive feature of this approach to structural estimation is that clear guidance regarding relevant moments is available a priori: this is provided by the reduced-form exercise under investigation.

Chapter 10 concludes the text by providing an overview of the methodologies presented in the book. It focuses specifically on comparing and contrasting them, and highlighting their relative strengths and weaknesses. It also offers a tentative look at future directions for empirical work in macroeconomics.

3 Notation

A common set of notation is used throughout the text in presenting models and empirical methodologies. A summary is as follows. Steady state values of the levels of variables are denoted with an upper bar; for example, the steady state value of the level of output y_t is denoted as ȳ. Logged deviations of variables from steady state values are denoted using tildes; e.g., ỹ_t = log(y_t/ȳ). The vector x_t denotes the collection of model variables, written (unless indicated otherwise) in terms of logged deviations from steady state values; e.g., x_t = [ỹ_t c̃_t ñ_t]′. The vector υ_t denotes the collection of structural shocks incorporated in the model, and η_t denotes the collection of expectational errors associated with intertemporal optimality conditions. Finally, the k×1 vector μ denotes the collection of deep parameters associated with the structural model.

Log-linear approximations of structural models are represented as

    A x_{t+1} = B x_t + C υ_{t+1} + D η_{t+1},    (1)

where the elements of the matrices A, B, C and D are functions of the structural parameters μ. Solutions of (1) are expressed as

    x_{t+1} = F(μ) x_t + G(μ) υ_{t+1}.    (2)

In (2), certain variables in the vector x_t are unobservable, while others (or linear combinations of variables) are observable. Thus filtering methods such as the Kalman filter must be employed to evaluate the system empirically. The Kalman filter requires an observer equation linking observables to unobservables. Observable variables are denoted by X_t, where

    X_t = H(μ)′ x_t.    (3)

Finally, defining e_{t+1} = G(μ) υ_{t+1}, the covariance matrix of e_{t+1} is given by

    Q(μ) = E(e_{t+1} e′_{t+1}).    (4)

Given assumptions regarding the stochastic nature of the structural shocks, (2)-(4) yield a log-likelihood function log L(X|Θ), where Θ collects the parameters in F(μ), H(μ) and Q(μ). Often it will be convenient to take as granted the mappings from μ to F, H and Q; in such cases the likelihood function will be written as L(X|μ).

The next chapter has two objectives. First, it outlines a procedure for mapping nonlinear systems into the form of (1). Second, it presents various solution methods for deriving (2), given (1).
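As a concrete illustration of evaluating the likelihood of the system (2)-(4), the following Python sketch runs the Kalman filter on a small state-space system with a scalar observable and no measurement error. This is not code from the text; the system matrices and all values are illustrative.

```python
import numpy as np

def kalman_loglik(X, F, G, H, sigma2=1.0):
    """Log-likelihood of observables X under the state-space system
        x_{t+1} = F x_t + G v_{t+1},   X_t = H' x_t,
    with scalar shocks v_t ~ N(0, sigma2). X is a length-T array of
    scalar observables in this sketch.
    """
    n = F.shape[0]
    Q = sigma2 * np.outer(G, G)          # Q = E[e e'], with e = G v
    # Initialize at the unconditional moments: mean zero, and the
    # stationary P solving P = F P F' + Q (by fixed-point iteration).
    xhat = np.zeros(n)
    P = np.eye(n)
    for _ in range(1000):
        P = F @ P @ F.T + Q
    loglik = 0.0
    for obs in X:
        # Prediction step.
        xpred = F @ xhat
        Ppred = F @ P @ F.T + Q
        # Innovation and its variance (scalar observable).
        nu = obs - H @ xpred
        S = H @ Ppred @ H
        loglik += -0.5 * (np.log(2 * np.pi) + np.log(S) + nu ** 2 / S)
        # Update step.
        K = Ppred @ H / S
        xhat = xpred + K * nu
        P = Ppred - np.outer(K, H @ Ppred)
    return float(loglik)

# Illustrative system: two states, one structural shock, one observable.
F = np.array([[0.9, 0.0], [0.3, 0.5]])
G = np.array([1.0, 0.5])
H = np.array([1.0, 0.0])

# Simulate observables from the model, then compare the likelihood at the
# true transition matrix against a deliberately wrong one.
rng = np.random.default_rng(2)
x = np.zeros(2)
X = []
for _ in range(500):
    x = F @ x + G * rng.standard_normal()
    X.append(float(H @ x))
ll_true = kalman_loglik(np.array(X), F, G, H)
ll_wrong = kalman_loglik(np.array(X), np.array([[0.3, 0.0], [0.3, 0.5]]), G, H)
```

Maximizing `kalman_loglik` over the parameters entering F, G and H is the maximum-likelihood exercise of Chapter 7; evaluating it inside a posterior simulator is the Bayesian exercise of Chapter 8.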

References

[1] Hansen, L.P. and T.J. Sargent. 1980. "Formulating and Estimating Dynamic Linear Rational Expectations Models." Journal of Economic Dynamics and Control 2:7-46.

[2] Ireland, P. 2004. "Technology Shocks in the New Keynesian Model." Review of Economics and Statistics 86:923-936.

[3] King, R.G., C.I. Plosser and S.T. Rebelo. 1988. "Production, Growth and Business Cycles. I. The Basic Neoclassical Model." Journal of Monetary Economics 21:195-232.

[4] Kydland, F.E. and E.C. Prescott. 1982. "Time to Build and Aggregate Fluctuations." Econometrica 50:1345-1370.

[5] Lucas, R.E. 1978. "Asset Prices in an Exchange Economy." Econometrica 46:1429-1445.

[6] Lucas, R.E. 1976. "Econometric Policy Evaluation: A Critique." In K. Brunner and A. Meltzer, eds., The Phillips Curve and Labor Markets. Carnegie-Rochester Conference Series on Public Policy, Vol. 1. Amsterdam: North-Holland.

[7] Sims, C.A. 1972. "Money, Income, and Causality." American Economic Review 62:540-552.