
NBER WORKING PAPER SERIES

MODELING MODEL UNCERTAINTY

Alexei Onatski
Noah Williams

Working Paper 9566
http://www.nber.org/papers/w9566

NATIONAL BUREAU OF ECONOMIC RESEARCH
1050 Massachusetts Avenue
Cambridge, MA 02138
March 2003

The original version of this paper was prepared for the 2002 ISOM in Frankfurt. We thank the participants in the seminar, especially our discussants Glenn Rudebusch and Ulf Söderström for detailed and insightful discussions, and Chris Sims for useful comments. We are extremely grateful to Jim Stock for inviting us to participate and for providing helpful comments. We also thank Glenn Rudebusch and Athanasios Orphanides for providing us with data. Finally, we thank the editor, Roberto Perotti, and three anonymous referees for comments and suggestions that greatly improved the substance and presentation of the paper. Alexei Onatski thanks Columbia University for providing financial support in the form of a Council Grant for summer research. The views expressed herein are those of the authors and not necessarily those of the National Bureau of Economic Research.

© 2003 by Alexei Onatski and Noah Williams. All rights reserved. Short sections of text, not to exceed two paragraphs, may be quoted without explicit permission provided that full credit, including © notice, is given to the source.

Modeling Model Uncertainty
Alexei Onatski and Noah Williams
NBER Working Paper No. 9566
March 2003
JEL No. E5

ABSTRACT

Recently there has been a great deal of interest in studying monetary policy under model uncertainty. We point out that different assumptions about the uncertainty may result in drastically different "robust" policy recommendations. Therefore, we develop new methods to analyze uncertainty about the parameters of a model, the lag specification, the serial correlation of shocks, and the effects of real time data in one coherent structure. We consider both parametric and nonparametric specifications of this structure and use them to estimate the uncertainty in a small model of the US economy. We then use our estimates to compute robust Bayesian and minimax monetary policy rules, which are designed to perform well in the face of uncertainty. Our results suggest that the aggressiveness recently found in robust policy rules is likely to be caused by overemphasizing uncertainty about economic dynamics at low frequencies.

Alexei Onatski
Department of Economics
Columbia University
420 West 118th Street
New York, NY 10027
ao2027@columbia.edu

Noah Williams
Department of Economics
Princeton University
Princeton, NJ 08544-1021
and NBER
noahw@princeton.edu

1 Introduction

Uncertainty is pervasive in economics, and this uncertainty must be faced continually by policymakers. Poor quality of data, unpredictable shocks hitting the economy, econometric errors in estimation, and a lack of understanding of the fundamental economic mechanisms are among many different factors causing the uncertainty. Often, the uncertainty is so large that the effects of policy decisions on the economy are thought to be ambiguous. Under such extreme uncertainty, any knowledge about the structure of the uncertainty, as scarce as it might be, is very informative and useful. In this paper we therefore consider the structural modeling of the uncertainty relevant for policymaking.

We start by supposing that through some process of theorizing and data analysis, policymakers have arrived at a reference model of the economy. They want to use this model to set policy, but are concerned about potential uncertain deviations from it. There are three component blocks of our uncertainty model: first, uncertainty about the parameters of the reference model (including uncertainty about the model's order); second, uncertainty about the serial correlation properties of shocks; and third, uncertainty about data quality. Our analysis is focused on a simple, empirically-based macroeconomic model developed by Rudebusch and Svensson (1999) (henceforth, the RS model). In order to specify and measure the uncertainty about the RS model, we use a Model Error Modeling approach which draws upon recent advances in the control system identification literature due to Ljung (1999). We then apply both Bayesian and minimax techniques to develop policy rules which are robust to the uncertainty that we describe. We focus for the most part on Taylor-type policy rules, in which the interest rate is set in response to inflation and the output gap.
We also consider some more complex, less restrictive policy rules, and arrive at essentially the same results. Overall the RS model and the policy rules we study provide an empirically relevant, but technically simple, laboratory to illustrate the important features of our analysis.

Recently there has been a great deal of research activity on monetary policy making under uncertainty. Unfortunately, the practical implications of this research turn out to be very sensitive to different assumptions about uncertainty. For example, the classic analysis of Brainard (1967) showed that uncertainty about the parameters of a model may lead to cautious policy. More recently, Sargent (1999) showed that the introduction of extreme uncertainty about the shocks in the Ball (1999) model implies that very aggressive policy rules may be optimal. On the contrary, Rudebusch (2001) shows that focusing on the real time data uncertainty in the conceptually similar RS model leads to the attenuation of the optimal policy rule. Further, Craine (1979) and Söderström (2002) show that uncertainty about the dynamics of inflation leads to aggressive policy rules. Finally, Onatski and Stock (2002) find that uncertainty about the lag structure of the RS model requires a cautious reaction to inflation, but an aggressive response to variation in the output gap.

The fact that the robust policy rules are so fragile with respect to different assumptions about the structure of uncertainty is not surprising by itself. Fragility is a general feature of optimizing models. Standard stochastic control methods are robust to realizations of shocks, as long as they come from the assumed distributions and feed through the model in the specified way. But the optimal rules may perform poorly when faced with a different shock distribution, or slight variation in the model. The policy rules discussed above are designed to be robust to a particular type of uncertainty, but may perform poorly when faced with uncertainty of a different nature. In our view, the most important message of the fragility of the robust rules is that to design a robust policy rule in practice, it is necessary to combine different sources of uncertainty in a coherent structure and carefully estimate or calibrate the size of the uncertainty.
In other words, we must structurally model uncertainty.

As described above, we assume that policymakers start with a reference model of the economy. At a general level, model uncertainty can be adequately represented by suitable special restrictions on the reference model's shocks. For example, if one is uncertain about

the parameters of the reference model or whether all relevant variables were included in the model, one should suspect that the reference shocks might actually be correlated with the explanatory variables in the model. That is, the reference model's shocks would now include "true" exogenous shocks and modeling errors. The model uncertainty can be formulated by defining a set of potentially true models for these errors, or by "Model Error Modeling."

One popular way to describe restrictions on the reference shocks (see for example Hansen and Sargent (2002)) is to assume that the shocks must be of bounded size, but arbitrary otherwise. We argue that a much more structured model of the shocks must be used to describe uncertainty relevant to monetary policymaking. In particular, we develop an example showing that the Hansen and Sargent (2002) approach may lead to the design of robust policy rules that can be destabilized by small parametric perturbations. Thus while the robust rule may resist shocks of a certain size, small variations in the underlying model can result in disastrous policy performance.

We then turn to the task of formulating an empirical description of uncertainty by model error modeling. In particular, we discuss and implement both parametric and nonparametric specifications for the RS model errors. The parametric specification imposes more structure and results in a probabilistic description of uncertainty. We estimate these parameters using Bayesian methods, obtaining a posterior distribution which characterizes the uncertainty. The nonparametric specification imposes fewer restrictions, and results in a deterministic specification of the uncertainty.
This allows us to calibrate the size of the uncertainty set, but as it is a deterministic description, we cannot evaluate the likelihood of alternative models in the set.

After we estimate or calibrate the uncertainty, we use our results to formulate robust policy rules which are designed to work well for the measured uncertainty. From the parametric specification, we have a distribution over possible models. Therefore for this specification we find robust optimal rules which minimize the Bayesian risk. From the nonparametric

specification, we have bounds on the uncertainty set. Therefore for this specification we find robust optimal rules which minimize the worst possible loss for the models in the set. This minimax approach follows much of the recent literature on robust control, and provides a tractable way of using our most general uncertainty descriptions. While there is the possibility that minimax results may be driven by unlikely models, we focus solely on empirically plausible model perturbations. Further, for many of our specifications the Bayesian and minimax results are quite similar. This suggests both that the stronger restrictions in the Bayesian framework do not greatly affect results, and that the minimax results are not driven by implausible worst case scenarios. It is worth noting that in all of our results we assume that policy makers commit to a rule once-and-for-all. Although this approach is common in the literature, it is clearly an oversimplification. This should be kept in mind, particularly when considering some of the bad outcomes we find for certain policy rules.

Without imposing much prior structure on the model perturbations, the parametric Bayesian analysis finds some attenuation in policy. This is in keeping with the Brainard (1967) intuition. However, our nonparametric-minimax analysis finds that dynamic instability is a possibility for any policy rule. This suggests the potential for very large losses and very poor economic performance when policy is conducted using such interest rate rules. However, when we tighten prior beliefs so that instability is deemed unlikely, our results change rather substantially. In this case, the optimal rule from the Bayesian analysis is slightly more aggressive than the optimal rule in the absence of model uncertainty. However, our minimax optimal rule is quite close to the no-uncertainty benchmark.
But these rules remain relatively aggressive in comparison with directly estimated policy rules.

Upon further inspection, we find that in many cases the most damaging model perturbations come from very low frequency changes. Correspondingly, many of the robust policy rules that we find are relatively aggressive, stemming from policymakers' fears of particularly bad long-run deviations from the RS model. In particular, we impose a vertical long-run

Phillips curve. Thus increases in the output gap would lead to very persistent increases in inflation in the absence of a relatively aggressive interest rate rule. The size of this persistent component is poorly measured, but has a huge impact on the losses sustained by the policymaker. However, the RS model is essentially a model of short-run fluctuations, and is not designed to capture long-run phenomena. By asking such a simple model to accommodate very low frequency perturbations, we feel that we are pushing the model too far. A more fully developed model would be necessary to capture low frequency behavior.

Further, policy makers are arguably most interested in using monetary policy to smooth cyclical fluctuations, which justifies focusing on the business cycle properties of the model. In addition, for technical reasons we find a substantial divergence between our parametric and nonparametric uncertainty specifications at low frequencies. Thus we believe that for practical purposes, it is prudent to downweight the importance of the low frequency movements. To tailor our analysis to the most relevant sources of uncertainty, we reconsider our results when restricting attention to uncertainty at business cycle frequencies (corresponding to periods from 6 to 32 quarters).

Interestingly, in this case the robust optimal policy rules are less aggressive than when facing uncertainty at all frequencies. This effect is largest for the minimax specification, which seeks to minimize the worst case. Faced with uncertainty at all frequencies, this requires relatively aggressive policy rules to guard against the possibility of inflation growing out of control. But when we introduce uncertainty at business cycle frequencies only, the worst case scenarios occur at these frequencies, making policy very responsive to cyclical fluctuations. This comes at the cost of downweighting low frequency movements.
Instead of fighting off any incipient inflation, policy becomes less aggressive, and focuses more on counter-cyclical stabilization. This contrasts with policymakers worried about low frequency perturbations, who may be reluctant to try to stimulate the economy in a recession. The same basic mechanism applies in the Bayesian case, but there policy minimizes the average

loss across frequencies. Low frequency perturbations again imply more aggressive policy, but these perturbations are given much less weight when choosing policy rules to minimize the Bayesian risk. Thus the effects of removing low frequency perturbations are much smaller.

One of the main benefits of our approach is that it allows us to treat many different forms of uncertainty in a unified framework. However, it is also interesting to consider the different sources independently. This allows us to see how the uncertainty channels affect policy rules, and to determine which channels have the largest effects on losses. These results can provide useful information for users of similar models, by pointing out the most important parts of the model specification. Echoing our discussion of the fragility of robust rules above, we find that the different channels have rather different effects. Uncertainty about the parameters and the lag structure is likely the most important channel. It turns out that many of the empirically plausible perturbations in this case make the model easier to control, so the resulting Bayesian rules are attenuated and lead to smaller losses. However, for all policy rules, we find that instability is possible under our nonparametric calibration, suggesting a disastrous worst case. We also find that real time data uncertainty may have significant effects on optimal policy rules and their performance. When we restrict our attention to business cycle frequencies, we again find that most of the policy rules become attenuated.

In the next section of the paper we describe the framework for our analysis at a general level. In Section 3 we present an example highlighting the importance of the model of uncertainty, and show that parametric and shock uncertainty must be considered separately. Section 4 describes our application of the Model Error Modeling approach to find both parametric and nonparametric measures of the uncertainty associated with the Rudebusch-Svensson model.
Section 5 formulates robust monetary policy rules based on our uncertainty descriptions. Section 6 concludes.

2 General Framework

The general issue that we consider in this paper is decision making under model uncertainty. In particular, we focus on the policy-relevant problem of choosing interest rate rules when the true model of the economy is unknown and may be subject to different sources of uncertainty. The goal of the paper is to provide characterizations of the empirically relevant sources of uncertainty, and to design policy rules which account for that uncertainty.

The starting point of our analysis is a reference model of the economy:

$$x_{t+1} = A(L)x_t + B_1(L)u_t + B_2(L)\varepsilon_t \qquad (1)$$
$$y_t = C(L)x_t + D(L)\varepsilon_t, \qquad (2)$$

where $x_t$ is a vector of macroeconomic indicators, $u_t$ is a vector of controls such as taxes, money, or interest rates, $y_t$ is a vector of variables observed in real time, $\varepsilon_t$ is a vector of white noise shocks, and $A(L)$, $B_i(L)$, $C(L)$, and $D(L)$ are matrix lag polynomials. Note that the majority of purely backward-looking models of the economy can be represented in the above form. In fact, by defining the state appropriately, this system of equations has a standard state-space form. We consider this form of the reference model because, as will soon be clear, it accords with our description of the uncertainty.

As mentioned in the introduction, we assume that through some unmodeled process of trial and error policy makers have arrived at a reference model of the economy. In this paper, we do not address the important question of how to choose a reference model. Instead, we assume that the reference model is given, and policy makers are concerned about small deviations of the true model from the reference one. This is also the starting point of much of the literature on robustness in economics, as described for example in Hansen and Sargent (2002). A more ambitious question of what policy a central bank should follow

under vast disagreement about the true model of the economy is addressed, for example, in Levin, Wieland, and Williams (1999).

We assume that policymakers have a time-additively separable quadratic loss function:

$$L_t = E_t \sum_{i=0}^{\infty} \beta^i x_{t+i}' \Lambda x_{t+i}.$$

They seek to minimize losses by choosing a policy rule from an admissible class:

$$u_t = f(y_t, y_{t-1}, \ldots, u_{t-1}, u_{t-2}, \ldots).$$

The admissible class does not necessarily include the optimal control because the optimality of a rule may be traded off with its other characteristics, such as simplicity. In some cases it is more convenient to discuss policymakers maximizing a utility function, which is simply the negative of the loss function.

Equations (1) and (2) can be estimated for a time period in the past for which both real-time data $y_t$, $u_t$ and the final data $x_t$ are available. The obtained estimates can then be used to compute the best policy rule from the admissible class. The quality of the policy rule obtained in this way will depend on the accuracy of the reference model. In general, this model will not be completely accurate. The reference model is likely to be a stylized macroeconomic model, which for tractability may leave out certain variables or focus only on the first few lags of the relevant variables. While these simplifications may be justified for both practical and statistical reasons, we will show that they can have a large impact on policy decisions.

We assume that a more accurate model of the economy encompasses the reference model
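As a concrete illustration of evaluating the loss for a candidate rule, the reference model can be stacked into first-order (VAR(1)) form and the discounted quadratic loss approximated by simulation. The sketch below is ours, not the paper's method: the function name, the reduction to $x_{t+1} = Ax_t + Bu_t + \varepsilon_{t+1}$, and the linear rule $u_t = -Fx_t$ are illustrative assumptions.

```python
import numpy as np

def discounted_loss(A, B, F, Lam, beta, Sigma, T=2000, seed=0):
    """Monte Carlo approximation of E sum_i beta^i x'_{t+i} Lam x_{t+i}
    for x_{t+1} = A x_t + B u_t + eps_{t+1} under the feedback rule
    u_t = -F x_t (hypothetical names, for illustration only)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x = np.zeros(n)
    total = 0.0
    for t in range(T):
        total += beta**t * (x @ Lam @ x)   # per-period quadratic loss
        u = -F @ x                          # linear feedback rule
        eps = rng.multivariate_normal(np.zeros(n), Sigma)
        x = A @ x + B @ u + eps             # state transition
    return total
```

Averaging such simulated losses over several seeds gives a crude estimate of the expected loss of a rule; in practice one would solve the associated Lyapunov equation instead.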

as follows:

$$x_{t+1} = \left(A(L) + \tilde{A}(L)\right)x_t + \left(B_1(L) + \tilde{B}_1(L)\right)u_t + \left(B_2(L) + \tilde{B}_2(L)\right)\varepsilon_t \qquad (3)$$
$$y_t = \left(C(L) + \tilde{C}(L)\right)x_t + \left(D(L) + \tilde{D}(L)\right)\varepsilon_t, \qquad (4)$$

where $\tilde{A}(L)$, $\tilde{B}_i(L)$, $\tilde{C}(L)$ and $\tilde{D}(L)$ are relatively unconstrained matrix lag polynomials of potentially infinite order. Our assumption allows for a rich variety of potential defects in the reference model. Econometric errors in the estimation of the reference parameters, misspecifications of the lag structure of the reference equations, and misinterpretations of the real-time data are all considered as distinct possibilities.

We assume that the central bank wants to design a policy rule that works well not only for the reference model but also for statistically plausible deviations from the reference model having form (3)-(4). Formally, such a set can be defined by a number of restrictions $R$ on the matrix lag polynomials $\tilde{A}(L)$, $\tilde{B}_i(L)$, $\tilde{C}(L)$ and $\tilde{D}(L)$. The restrictions $R$ may be deterministic if sets of the admissible matrix lag polynomials are specified, or stochastic if distributions of the polynomials' parameters are given.

We formalize policy makers' desire for robustness by assuming that they use a Bayesian or minimax strategy for choosing the policy, depending on whether $R$ is stochastic or deterministic. That is, in the stochastic case policy makers solve the Bayes problem:

$$\min_{u_t = f(\cdot)} E_R L_t \qquad (5)$$

where the expectation is taken with respect to distributions of the potential deviations from the reference model specified by $R$. In the deterministic case, they solve the minimax problem:

$$\min_{u_t = f(\cdot)} \max_{R} L_t \qquad (6)$$

where the maximum is taken over all matrix lag polynomials $\tilde{A}(L)$, $\tilde{B}_i(L)$, $\tilde{C}(L)$ and $\tilde{D}(L)$ satisfying the deterministic restrictions $R$.¹

It is needless to say that, at least in principle, the particular structure of the restrictions $R$ will strongly affect solutions to the above problems. In the next section, we illustrate the importance of this structure through a simple example.

3 Consequences of Different Uncertainty Models

It is useful to re-write (3)-(4) to represent the model uncertainty in the form:

$$x_{t+1} = A(L)x_t + B_1(L)u_t + w_t$$
$$y_t = C(L)x_t + s_t,$$

where we define the "model errors" as:

$$w_t = \tilde{A}(L)x_t + \tilde{B}_1(L)u_t + \left(B_2(L) + \tilde{B}_2(L)\right)\varepsilon_t,$$
$$s_t = \tilde{C}(L)x_t + \left(D(L) + \tilde{D}(L)\right)\varepsilon_t, \qquad (7)$$

and $\tilde{A}(L)$, $\tilde{B}_i(L)$, $\tilde{C}(L)$ and $\tilde{D}(L)$ comply with $R$. This representation shows that the uncertainty may be described by restrictions (7) on the model errors $w_t$ and $s_t$.

One approach to model uncertainty, similar in spirit to that developed by Hansen and Sargent (2002), does not impose any special structure on $w_t$ and $s_t$. Instead, the approach

¹ Note that in our formulation, the model uncertainty takes the form of a one-time uncertain shift in the parameters or specification of the reference model. For an analysis of uncertainty interpreted as a stochastic process in the space of models see Rudebusch (2001).
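To make the contrast between criteria (5) and (6) concrete: once the losses of a single policy rule have been computed across a finite set of candidate models, the Bayesian criterion reduces to a (probability-weighted) average and the minimax criterion to a maximum over that set. A minimal sketch, with function and argument names of our own choosing:

```python
import numpy as np

def bayes_and_minimax(losses, probs=None):
    """Given the losses of one policy rule across candidate models,
    return (Bayesian risk, worst-case loss), the objects minimized
    in (5) and (6) respectively. Uniform weights if probs is None."""
    losses = np.asarray(losses, dtype=float)
    if probs is None:
        probs = np.full(len(losses), 1.0 / len(losses))
    return float(losses @ probs), float(losses.max())
```

A Bayesian policymaker would pick the rule with the smallest first component; a minimax policymaker the rule with the smallest second component.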

considers all errors subject to the restriction:

$$E \sum_{t=0}^{\infty} \beta^t \left( w_t' \Phi_1 w_t + s_t' \Phi_2 s_t \right) \le \eta. \qquad (8)$$

The parameter $\eta$ in the above inequality regulates the size of uncertainty, and it may be calibrated so that the corresponding deviations from the reference model are statistically plausible. While this approach seems quite general and unrestrictive, not taking into account the particular structure of $w_t$ and $s_t$ may seriously mislead decision makers. We now develop an example illustrating this fact. The example considers a practically important situation, although in later sections we slightly change the policy rules and the loss function we consider.

We consider a two-equation purely backward-looking model of the economy proposed and estimated by Rudebusch and Svensson (1999). This model is the benchmark for the rest of the paper as well, and is given by:

$$\pi_{t+1} = .70\,\pi_t - .10\,\pi_{t-1} + .28\,\pi_{t-2} + .12\,\pi_{t-3} + .14\,y_t + \varepsilon_{\pi,t+1} \qquad (9)$$
(standard errors: .08, .10, .10, .08, .03)

$$y_{t+1} = 1.16\,y_t - .25\,y_{t-1} - .10\,(\bar\imath_t - \bar\pi_t) + \varepsilon_{y,t+1}$$
(standard errors: .08, .08, .03)

The standard errors of the parameter estimates are given in parentheses. Here the variable $y$ stands for the gap between output and potential output, $\pi$ is inflation and $i$ is the federal funds rate. All the variables are quarterly, measured in percentage points at an annual rate and demeaned prior to estimation, so there are no constants in the equations. The variables $\bar\pi$ and $\bar\imath$ stand for four-quarter averages of inflation and the federal funds rate respectively. The first equation is a simple version of the Phillips curve, relating the output gap and inflation. The coefficients on the lags of inflation in the right hand side of the equation sum to one, so that the Phillips curve is vertical in the long run. The second equation is a variant of the IS curve, relating the real interest rate to the output gap. A policymaker can control
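The dynamics of (9) are easy to simulate. In the sketch below the coefficients come from (9), but the Taylor-type policy function and the shock standard deviation are our own illustrative assumptions, not taken from the paper:

```python
import numpy as np

def simulate_rs(T=200, shock_sd=0.5,
                policy=lambda pibar, y: 1.5 * pibar + 0.5 * y, seed=0):
    """Simulate the Rudebusch-Svensson model (9) under an illustrative
    Taylor-type rule. policy and shock_sd are assumptions for this sketch."""
    rng = np.random.default_rng(seed)
    pi = np.zeros(T)   # inflation
    y = np.zeros(T)    # output gap
    i = np.zeros(T)    # federal funds rate
    for t in range(4, T - 1):
        pibar = pi[t - 3:t + 1].mean()   # four-quarter average inflation
        i[t] = policy(pibar, y[t])
        ibar = i[t - 3:t + 1].mean()     # four-quarter average funds rate
        pi[t + 1] = (0.70 * pi[t] - 0.10 * pi[t - 1] + 0.28 * pi[t - 2]
                     + 0.12 * pi[t - 3] + 0.14 * y[t]
                     + shock_sd * rng.standard_normal())
        y[t + 1] = (1.16 * y[t] - 0.25 * y[t - 1] - 0.10 * (ibar - pibar)
                    + shock_sd * rng.standard_normal())
    return pi, y, i
```

Note that the inflation lag coefficients (.70 − .10 + .28 + .12) sum to one, which is the vertical long-run Phillips curve discussed in the text.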

the federal funds rate and wants to do so in order to keep $y$ and $\pi$ close to their target values (zero in this case). For the present, we ignore the real-time data issues, so that our reference model does not include equations describing the real-time data generating process.

In general, the policy maker's control policy may take the form of a contingency plan for her future settings of the federal funds rate. Here we restrict attention to Taylor-type rules for the interest rate. As emphasized by McCallum (1988) and Taylor (1993), simple rules have the advantage of being easy for policymakers to follow and easy to interpret. In our analysis in later sections, we consider simple rules but we also analyze the performance of feedback rules of a more general form. In this section, we assume that the policymaker chooses among the following rules:

$$i_t = g_\pi \bar\pi_{t-1} + g_y y_{t-2} \qquad (10)$$

Here, the interest rate reacts to both inflation and the output gap with delay. The delay in the reaction to the output gap is longer than that in the reaction to inflation because it takes more time to accurately estimate the gap. The timing in the above policy rule is unorthodox, and is made here to sharpen our results. In later sections we use the more conventional timing, in which the interest rate responds contemporaneously to inflation and the output gap, and we also consider more general policy rules.

Following Rudebusch and Svensson (1999), we assume here that the policy maker has the quadratic loss function:

$$L_t = \bar\pi_t^2 + y_t^2 + \frac{1}{2}(i_t - i_{t-1})^2. \qquad (11)$$

The inclusion of the interest-smoothing term $(i_t - i_{t-1})^2$ in the loss function is somewhat controversial. Our results will not depend on whether this term is included in the loss function
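The realized counterpart of loss (11) along a simulated path can be computed directly; a small sketch (the function name is ours):

```python
import numpy as np

def realized_loss(pibar, y, i):
    """Average realized loss (11) along simulated paths:
    mean over t of pibar_t^2 + y_t^2 + 0.5*(i_t - i_{t-1})^2."""
    pibar, y, i = map(np.asarray, (pibar, y, i))
    di = np.diff(i)  # interest-rate changes
    return float(np.mean(pibar[1:]**2 + y[1:]**2 + 0.5 * di**2))
```

Averaging this quantity over long simulated paths approximates the expected loss of a candidate rule $(g_\pi, g_y)$, which could then be minimized by a grid search.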

or not, but we keep it here to again sharpen our results. In later sections we assume, as in Woodford (2002), that the loss function depends on the level of the interest rate, not the changes in rates.

If the policy maker were sure that the model is correctly specified, she could use standard methods to estimate the expected loss for any given policy rule (10). Then she could find the optimal rule numerically. Instead, we assume that the policy maker has some doubts about the model. She wants therefore to design her control so that it works well for reasonable deviations from the original specification. One of the most straightforward ways to represent her doubts is to assume that the model parameters may deviate from their point estimates as, for example, is assumed in Brainard (1967). It is also likely that the policy maker would not rule out misspecifications of the model's lag structure. As Blinder (1997) states, "Failure to take proper account of lags is, I believe, one of the main sources of central bank error."

For the sake of illustration, we assume that the policy maker contemplates the possibility that one extra lag of the output gap in the Phillips curve and IS equations and one extra lag of the real interest rate in the IS equation were wrongfully omitted from the original model. She therefore re-estimates the Rudebusch-Svensson model with the additional lags. The re-estimated model has the following form:

$$\pi_{t+1} = .70\,\pi_t - .10\,\pi_{t-1} + .28\,\pi_{t-2} + .12\,\pi_{t-3} + .14\,y_t + .00\,y_{t-1} + \varepsilon_{\pi,t+1} \qquad (12)$$
(standard errors: .08, .10, .10, .09, .10, .10)

$$y_{t+1} = 1.13\,y_t - .08\,y_{t-1} - .14\,y_{t-2} - .32\,(\bar\imath_t - \bar\pi_t) + .24\,(\bar\imath_{t-1} - \bar\pi_{t-1}) + \varepsilon_{y,t+1}$$
(standard errors: .08, .12, .08, .14, .14)

Then she obtains the covariance matrix of the above point estimates and tries to design her control so that it works best for the worst reasonable deviation of the parameters from the point estimates.
For example, she may consider all parameter values inside the 50% confidence ellipsoid around the point estimates.²

² In the later sections of the paper we discuss a more systematic way of representing and estimating the

We will soon return to this problem, but for now let us give an alternative, less structured, description of the uncertainty. In general, we can represent uncertainty by modeling the errors $w_{1t}$, $w_{2t}$ of the Phillips curve and the IS equations as any processes satisfying:

$$E \sum_{t=0}^{\infty} \beta^t \left( \frac{w_{1t}^2}{\mathrm{Var}(\varepsilon_{\pi t})} + \frac{w_{2t}^2}{\mathrm{Var}(\varepsilon_{y t})} \right) \le \eta.$$

Here we will consider the case $\beta = 1$. The special choice of the weights on errors to the Phillips curve and the IS equations was made to accommodate the MATLAB codes that we use in our calculations.

In the extreme case when $\eta$ tends to infinity, our uncertainty will be very large, so the corresponding robust (minimax) rule must insure the policy maker against a large variety of deviations from the reference model. It can be shown that such an "extremely robust" policy rule minimizes the so-called $H_\infty$ norm of the closed loop system transforming the noise $\varepsilon_t = \left(\varepsilon_{\pi t}/\sqrt{\mathrm{Var}(\varepsilon_{\pi t})},\ \varepsilon_{y t}/\sqrt{\mathrm{Var}(\varepsilon_{y t})}\right)'$ into the target variables $z_t = \left(\pi_t,\ y_t,\ (i_t - i_{t-1})/\sqrt{2}\right)'$ (see Hansen and Sargent (2002)). It is therefore easy to find such a rule numerically using, for example, commercially available MATLAB codes to compute the $H_\infty$ norm. Our computations give the following rule:

$$i_t = 3.10\,\bar\pi_{t-1} + 1.41\,y_{t-2}. \qquad (13)$$

Now let us return to our initial formulation of the problem. Recall that originally we wanted to find a policy rule that works well for all deviations of the parameters of the re-estimated model (12) inside a 50% confidence ellipsoid around the point estimates. Somewhat surprisingly, the above "extremely robust" rule does not satisfy our original criterion for robustness. In fact, it destabilizes the economy for deviations from the parameters' point

² (continued) model uncertainty. We also do not restrict our attention to the minimax setting as we do in this section.
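For readers without specialized toolboxes, the $H_\infty$ norm of a stable discrete-time system can be approximated by gridding the unit circle and taking the largest singular value of the frequency response. This rough sketch is our own, not the MATLAB code used in the paper:

```python
import numpy as np

def hinf_norm(A, B, C, D, n_grid=2000):
    """Approximate the H-infinity norm of the discrete-time system
    G(z) = C (zI - A)^{-1} B + D, i.e. the supremum over the unit
    circle of the largest singular value of G(e^{i w})."""
    A, B, C, D = map(np.atleast_2d, (A, B, C, D))
    n = A.shape[0]
    best = 0.0
    for w in np.linspace(0.0, np.pi, n_grid):
        z = np.exp(1j * w)
        G = C @ np.linalg.solve(z * np.eye(n) - A, B) + D
        best = max(best, np.linalg.svd(G, compute_uv=False)[0])
    return best
```

For the scalar system $x_{t+1} = 0.5 x_t + \varepsilon_t$ the frequency response peaks at frequency zero, where it equals $1/(1-0.5) = 2$, which the grid approximation recovers.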

estimates inside a confidence ellipsoid as small as 20%. More precisely, the policy rule (13) results in infinite expected loss for the following perturbation of the Rudebusch-Svensson (RS) model:

$$\pi_{t+1} = .68\,\pi_t - .13\,\pi_{t-1} + .35\,\pi_{t-2} + .10\,\pi_{t-3} + .30\,y_t - .15\,y_{t-1} + \varepsilon_{\pi,t+1} \qquad (14)$$
$$y_{t+1} = 1.15\,y_t - .07\,y_{t-1} - .18\,y_{t-2} - .51\,(\bar\imath_t - \bar\pi_t) + .41\,(\bar\imath_{t-1} - \bar\pi_{t-1}) + \varepsilon_{y,t+1}.$$

Let us denote the independent coefficients of the above model, the re-estimated RS model (12), and the original RS model as $c$, $c_1$, and $c_0$
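Whether a given perturbation such as (14) destabilizes the economy under a fixed rule can be checked mechanically: write the closed-loop system in companion form and test whether its spectral radius lies below one. A minimal sketch; the AR(2) example in the test uses only the autoregressive part of the perturbed output-gap equation, purely for illustration:

```python
import numpy as np

def is_stable(companion):
    """A linear difference system is stable iff the spectral radius
    (largest eigenvalue modulus) of its companion matrix is below one."""
    rho = max(abs(np.linalg.eigvals(np.atleast_2d(companion))))
    return bool(rho < 1.0)
```

An infinite expected loss, as reported for rule (13) under model (14), corresponds precisely to this stability test failing for the closed-loop system.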

