Dynamic programming for a Markov-switching jump–diffusion

  • Authors: N. Azevedo, Diogo Pinheiro, G.-W. Weber
  • Published in: Journal of Computational and Applied Mathematics, ISSN 0377-0427, Vol. 267, No. 1, 2014, pp. 1-19
  • Language: English
  • DOI: 10.1016/j.cam.2014.01.021
  • Full text not available
  • Abstract
    • We consider an optimal control problem with a deterministic finite horizon and state variable dynamics given by a Markov-switching jump–diffusion stochastic differential equation. Our main results extend the dynamic programming technique to this larger family of stochastic optimal control problems. More specifically, we provide a detailed proof of Bellman's optimality principle (or dynamic programming principle) and obtain the corresponding Hamilton–Jacobi–Bellman equation, which turns out to be a partial integro-differential equation due to the extra terms arising from the Lévy process and the Markov process. As an application of our results, we study a finite horizon consumption–investment problem for a jump–diffusion financial market consisting of one risk-free asset and one risky asset whose coefficients are assumed to depend on the state of a continuous time finite state Markov process. We provide a detailed study of the optimal strategies for this problem, for the economically relevant families of power utilities and logarithmic utilities.
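
The abstract only summarizes the setting; the sketch below records the standard form such a problem takes, with all notation (state X, control u, coefficients b, σ, γ, Lévy measure ν, Markov chain α with generator Q = (q_{ij}), running utility f, terminal payoff Ψ) assumed here for illustration rather than taken from the paper itself.

State dynamics (a controlled Markov-switching jump–diffusion driven by a Brownian motion W and a compensated Poisson random measure Ñ):

\[
dX(t) = b\bigl(t, X(t), \alpha(t), u(t)\bigr)\,dt
      + \sigma\bigl(t, X(t), \alpha(t), u(t)\bigr)\,dW(t)
      + \int_{\mathbb{R}} \gamma\bigl(t, X(t^-), \alpha(t^-), u(t), z\bigr)\,\tilde{N}(dt, dz).
\]

Value function on the finite horizon [t, T]:

\[
V(t, x, i) = \sup_{u(\cdot)} \mathbb{E}\!\left[\int_t^T f\bigl(s, X(s), \alpha(s), u(s)\bigr)\,ds
  + \Psi\bigl(X(T), \alpha(T)\bigr) \,\middle|\, X(t) = x,\ \alpha(t) = i\right].
\]

The dynamic programming principle then leads to a Hamilton–Jacobi–Bellman equation that is a partial integro-differential equation: the integral term comes from the Lévy jumps and the sum over regimes from the Markov chain,

\[
\begin{aligned}
0 = {} & \partial_t V(t,x,i) + \sup_{u}\Bigl\{ f(t,x,i,u) + b(t,x,i,u)\,\partial_x V(t,x,i)
        + \tfrac{1}{2}\,\sigma^2(t,x,i,u)\,\partial_{xx} V(t,x,i) \\
       & \quad + \int_{\mathbb{R}} \bigl[V\bigl(t, x + \gamma(t,x,i,u,z), i\bigr) - V(t,x,i)
        - \gamma(t,x,i,u,z)\,\partial_x V(t,x,i)\bigr]\,\nu(dz)\Bigr\} \\
       & + \sum_{j \neq i} q_{ij}\bigl[V(t,x,j) - V(t,x,i)\bigr],
\qquad V(T,x,i) = \Psi(x,i).
\end{aligned}
\]

In the consumption–investment application described in the abstract, X(t) would be the investor's wealth, the control u(t) = (π(t), c(t)) the fraction of wealth in the risky asset and the consumption rate, and f a power or logarithmic utility of consumption; in that Merton-type setting the HJB equation typically admits a separable solution with explicit optimal π and c (again, this specific notation is assumed, not quoted from the paper).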

