The optimal control of a Lévy process
In this thesis we study the problem of optimally controlling the drift of a Lévy process. We show that, for a broad class of Lévy processes, the partial integro-differential Hamilton-Jacobi-Bellman equation for the value function admits classical solutions and that optimal control policies exist in feedback form. We then explore the class of Lévy processes that satisfy the requirements of the theorem, and find connections between the uniform integrability requirement and the notions of the score function and Fisher information from information theory. Finally, we present three different numerical implementations of the control problem: a traditional dynamic programming approach, and two iterative approaches, one based on a finite difference scheme and the other on the Fourier transform.
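To make the finite-difference flavour of the problem concrete, the following is a minimal sketch (not the thesis's actual scheme) of an explicit backward-in-time finite-difference solver for the HJB partial integro-differential equation of a drift-controlled jump-diffusion. The model, grid, quadratic running cost, and Gaussian jump law are all illustrative assumptions; the thesis's processes, costs, and discretisations may differ.

```python
import numpy as np

# Illustrative explicit finite-difference scheme for an HJB PIDE of the form
#   v_t + min_u [ u v_x + u^2/2 ] + (sigma^2/2) v_xx
#       + lam * E[ v(x + J) - v(x) ] + x^2 = 0,   v(T, x) = 0,
# for a drift-controlled jump-diffusion. The inner minimisation over the
# drift u gives the feedback control u*(x) = -v_x(x), and the minimised
# Hamiltonian term equals -v_x^2 / 2. All parameters below are hypothetical.

sigma, lam = 0.3, 1.0              # diffusion coefficient, jump intensity
T, nt = 1.0, 2000                  # horizon and number of time steps
x = np.linspace(-3.0, 3.0, 121)    # spatial grid
dx, dt = x[1] - x[0], T / nt
rng = np.random.default_rng(0)
jumps = rng.normal(0.0, 0.5, 400)  # Monte-Carlo sample of the jump law J

v = np.zeros_like(x)               # terminal condition v(T, x) = 0
for _ in range(nt):                # march backwards in time
    vx = np.gradient(v, dx)
    vxx = np.zeros_like(v)
    vxx[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
    # nonlocal jump term: E[v(x + J)] approximated by interpolating v
    # at the sampled jump destinations and averaging
    Ev = np.interp(x[:, None] + jumps[None, :], x, v).mean(axis=1)
    v = v + dt * (-0.5 * vx**2 + 0.5 * sigma**2 * vxx
                  + lam * (Ev - v) + x**2)

u_star = -np.gradient(v, dx)       # feedback control u*(x) = -v_x(x)
```

The explicit time step must satisfy a CFL-type stability condition (here dt is well below dx²/σ²); the iterative schemes mentioned in the abstract would instead solve for the value function without explicit time-marching.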