Continuous-time speed for discrete-time models: A Markov-chain approximation method | Munich Center for the Economics of Aging - MEA

Content

We propose a Markov-chain approximation method for discrete-time control problems, showing how to reap the speed gains of continuous-time algorithms in this class of models. Our approach specifies a discrete Markov chain on a grid, taking a first-order approximation of conditional distributions in their first and second moments around a reference point. Standard dynamic-programming results guarantee convergence. We show how to apply our method to standard consumption-savings problems with and without a portfolio choice, realizing speed gains of up to two orders of magnitude (a factor of 100) relative to state-of-the-art methods using the same number of grid points, without significant loss of precision. We show how to avoid the curse of dimensionality and keep computation times manageable in high-dimensional problems with independent shocks. Finally, we show how our approach can substantially simplify the computation of dynamic games with a large state space, solving a discrete-time version of the altruistic savings game studied by Barczyk & Kredler (2014).
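The abstract describes approximating conditional distributions on a grid by matching their first and second moments around a reference point. As a minimal illustrative sketch (not the authors' implementation), the following one-dimensional three-point scheme assigns transition probabilities to neighboring grid points so that the chain's conditional mean and variance match given targets; the function name, the uniform grid spacing `h`, and the moment inputs `mean` and `var` are all assumptions for illustration.

```python
import numpy as np

def local_transition(x, h, mean, var):
    """Three-point transition probabilities on a uniform grid with spacing h.

    Matches the first two conditional moments of the next-period state:
    the chain moves to x - h, stays at x, or moves to x + h.
    Illustrative sketch only; `mean` and `var` are the conditional mean
    and variance of the next-period state given the current state x.
    """
    d = mean - x            # conditional mean displacement
    s2 = var + d**2         # conditional second moment of the displacement
    p_up = 0.5 * (s2 / h**2 + d / h)
    p_down = 0.5 * (s2 / h**2 - d / h)
    p_stay = 1.0 - p_up - p_down
    probs = np.array([p_down, p_stay, p_up])
    if np.any(probs < 0.0):
        # The scheme is only valid when the moments fit the local grid scale.
        raise ValueError("grid too coarse for these moments; refine h")
    return probs

# Example: at x = 0 with grid spacing 0.1, conditional mean 0.02, variance 0.003.
p = local_transition(0.0, 0.1, 0.02, 0.003)
implied_mean = (p[2] - p[0]) * 0.1          # matches the target mean 0.02
implied_m2 = (p[0] + p[2]) * 0.1**2         # matches var + mean^2 = 0.0034
```

By construction, `implied_mean` equals the target conditional mean and `implied_m2` equals the target second moment, which is the moment-matching property the abstract refers to; multi-dimensional problems with independent shocks can apply such a scheme dimension by dimension.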

Publication Details

Ivo Bakota


Matthias Kredler

2022
Max Planck Institute for Social Law and Social Policy, Munich Center for the Economics of Aging (MEA)
Munich