Research


Working Papers

  • Estimating Robustness
    [PDF, September 2017] [online appendix] [code]

    I estimate and evaluate a model with a representative agent who is concerned that the persistence properties of her baseline model of consumption and inflation are misspecified. Coping with model uncertainty, she discovers a pessimistically biased worst-case model that dictates her behavior. I combine interest rates and aggregate macro series with cross-equation restrictions implied by robust control theory to estimate this worst-case distribution and show that (1) the model’s predictions about key features of the yield curve are in line with the data, and (2) the degree of pessimism underlying these findings is plausible. Interpreting the worst-case model as the agent’s subjective belief, I derive model-implied interest rate forecasts and compare them with analogous survey expectations. I find that the model can replicate the dynamics and average level of bias found in the survey.
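    A minimal sketch of the mechanism behind the worst-case model (standard multiplier preferences in the robust-control tradition; the notation here is mine, not necessarily the paper’s):

    ```latex
    \min_{m \geq 0,\; \mathbb{E}[m]=1}\; \mathbb{E}\big[\, m\,U \;+\; \theta\, m \log m \,\big]
    \qquad\Longrightarrow\qquad
    m^{*} \;\propto\; \exp\!\left(-U/\theta\right),
    ```

    where U is continuation utility under the baseline model, m is a likelihood ratio, and θ penalizes relative-entropy deviations from the baseline. The worst-case distribution exponentially tilts probabilities toward low-utility outcomes, which is the source of the pessimistic bias the cross-equation restrictions identify.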


  • Twisted Probabilities, Uncertainty, and Prices
    (with Lars Peter Hansen, Thomas J. Sargent, and Lloyd Han)
    [PDF, December 2018]

    A decision maker constructs a convex set of nonnegative martingales to use as likelihood ratios representing alternatives that are statistically close to her baseline model. The set is twisted to include some specific models of interest. Maxmin expected utility over that set gives rise to equilibrium prices of model uncertainty, expressed as worst-case distortions to drifts in a representative investor’s baseline model. Three quantitative illustrations start from baseline models with exogenous long-run risks in technology shocks. These baselines put endogenous long-run risks into consumption dynamics, with details that depend on how shocks affect returns to capital stocks. We describe sets of alternatives to a baseline model that generate countercyclical prices of uncertainty.
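    A schematic statement of the maxmin problem described above (my notation, suppressing the paper’s specific twisting construction):

    ```latex
    \max_{c}\;\min_{M \in \mathcal{M}}\;
    \mathbb{E}\left[ \int_0^{\infty} e^{-\delta t}\, M_t\, U(c_t)\, dt \right],
    ```

    where each nonnegative martingale M with M_0 = 1 reweights baseline probabilities as a likelihood ratio, and the convex set 𝓜 is twisted so that it contains the specific alternative models of interest. The minimizing M* delivers the worst-case drift distortions that appear in equilibrium as prices of model uncertainty.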


  • Learning with Misspecified Models
    (with Daniel Csaba)
    [PDF, November 2018]

    We consider Bayesian learning about a stable environment when all of the probability distributions (likelihoods) that the learner entertains are misspecified. We evaluate likelihoods according to the long-run average payoff of the policy function they induce. We then show that, generically, the value the Bayesian learner attains in the long run is lower than the best value achievable within her misspecified set of likelihoods. We introduce two kinds of indifference curves over the learner’s set: one based on the likelihoods’ induced long-run average payoff, and another capturing their statistical similarity to the data-generating process. Under misspecification, misalignment of these curves can lead the Bayesian learner to focus on payoff-irrelevant features of the environment; under correct specification, the misalignment has no bite. We provide conditions under which the learner can construct an exponential family of likelihoods that implements the best attainable policy in the long run irrespective of misspecification. We illustrate these concepts with examples.
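    A minimal numerical sketch of the central tension (a toy example of mine, not the paper’s model): with every entertained likelihood misspecified, the posterior concentrates on the likelihood closest to the truth in Kullback–Leibler divergence, which need not be the likelihood whose induced policy earns the highest long-run payoff.

    ```python
    import math
    import random

    # Outcomes are i.i.d. Bernoulli(0.6); the learner entertains only two
    # misspecified likelihoods. q1 is KL-closer to the truth, but q2's
    # induced policy earns more. (All numbers are illustrative.)
    random.seed(0)
    p_true = 0.6
    models = {"q1": 0.45, "q2": 0.90}

    # Policy induced by a model: take the action that pays off on y = 1
    # iff the model assigns it probability above one half.
    def long_run_payoff(p_model):
        return p_true if p_model > 0.5 else 1.0 - p_true

    # Bayesian updating with a uniform prior over the two likelihoods.
    log_post = {m: 0.0 for m in models}
    for _ in range(5000):
        y = 1 if random.random() < p_true else 0
        for m, q in models.items():
            log_post[m] += math.log(q if y == 1 else 1.0 - q)

    # Normalize log weights into posterior probabilities.
    mx = max(log_post.values())
    w = {m: math.exp(v - mx) for m, v in log_post.items()}
    z = sum(w.values())
    post = {m: w[m] / z for m in models}

    print(post)  # posterior piles onto q1, the KL-closest (but payoff-worse) model
    print(long_run_payoff(models["q1"]))  # 0.4: payoff of the model Bayes selects
    print(long_run_payoff(models["q2"]))  # 0.6: the better policy in the learner's set
    ```

    The learner ends up committed to q1 because Bayesian updating rewards statistical fit alone, even though her own set contains a likelihood whose policy would earn a strictly higher long-run average payoff.
    
    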