Bayraktar, E., Cosso, A., Pham, H. (2018). Randomized dynamic programming principle and Feynman-Kac representation for optimal control of McKean-Vlasov dynamics. Transactions of the American Mathematical Society, 370(3), 2115-2160. doi:10.1090/tran/7118.
Randomized dynamic programming principle and Feynman-Kac representation for optimal control of McKean-Vlasov dynamics
Cosso, Andrea;
2018
Abstract
We analyze a stochastic optimal control problem in which the state process follows McKean-Vlasov dynamics and the diffusion coefficient can be degenerate. We prove that its value function V admits a nonlinear Feynman-Kac representation in terms of a class of forward-backward stochastic differential equations with an autonomous forward process. We exploit this probabilistic representation to rigorously prove the dynamic programming principle (DPP) for V. The Feynman-Kac representation plays an important role beyond its intermediary role in the proof of our main result: indeed, it could be useful in developing probabilistic numerical schemes for V. The DPP is important for characterizing the value function as a solution of a nonlinear partial differential equation, the so-called Hamilton-Jacobi-Bellman equation, posed in this case on the Wasserstein space of probability measures. We note that the usual way of solving such control problems is through the Pontryagin maximum principle, which requires convexity assumptions. There have been previous attempts at using the dynamic programming approach, but those works assumed a priori that the controls were of Markovian feedback type, which allows the problem to be written solely in terms of the distribution of the state process (so that the control problem becomes deterministic). In this paper, we consider open-loop controls and derive the dynamic programming principle in this most general case. In order to obtain the Feynman-Kac representation and the randomized dynamic programming principle, we implement the so-called randomization method, which consists in formulating a new McKean-Vlasov control problem, expressed in weak form by taking the supremum over a family of equivalent probability measures. One of the main results of the paper is the proof that this latter control problem has the same value function V as the original control problem.
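For orientation, a generic controlled McKean-Vlasov problem of the kind described in the abstract can be written schematically as below. This is only an illustrative sketch: the coefficients b and σ, the running and terminal rewards f and g, the open-loop control α, and the initial condition ξ are placeholder symbols and need not match the paper's exact assumptions or notation.

```latex
% Schematic controlled McKean-Vlasov dynamics (illustrative placeholders, not the paper's exact formulation):
% the law \mathbb{P}_{X_t^\alpha} of the state enters the coefficients, and \sigma may be degenerate.
\begin{equation*}
  dX_t^{\alpha} = b\bigl(X_t^{\alpha}, \mathbb{P}_{X_t^{\alpha}}, \alpha_t\bigr)\,dt
              + \sigma\bigl(X_t^{\alpha}, \mathbb{P}_{X_t^{\alpha}}, \alpha_t\bigr)\,dW_t,
  \qquad X_0^{\alpha} = \xi .
\end{equation*}
% Value function, maximized over open-loop controls \alpha:
\begin{equation*}
  V \;=\; \sup_{\alpha}\,
  \mathbb{E}\Bigl[\int_0^T f\bigl(X_t^{\alpha}, \mathbb{P}_{X_t^{\alpha}}, \alpha_t\bigr)\,dt
  + g\bigl(X_T^{\alpha}, \mathbb{P}_{X_T^{\alpha}}\bigr)\Bigr].
\end{equation*}
```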
File | Size | Format
---|---|---
tran7118_AM.pdf (open access; type: Postprint; license: Creative Commons Attribution - NonCommercial - NoDerivatives, CC BY-NC-ND) | 642.04 kB | Adobe PDF