Multiple-shooting differential dynamic programming with applications to spacecraft trajectory optimization
The optimization of spacecraft trajectories has been, and continues to be, critical to the development of modern space missions. Longer flight times, continuous low-thrust propulsion, and multiple flybys are just a few of the modern features that produce increasingly complex optimal control problems for trajectory designers to solve. To tackle such challenging problems efficiently, a variety of methods and algorithms have been developed over the past decades. The work presented in this dissertation aims to improve the solutions and the robustness of optimal control algorithms, while reducing their computational load and the amount of human involvement they require. Several areas of improvement are examined in the dissertation. First, the general formulation of a Differential Dynamic Programming (DDP) algorithm is examined, and new theoretical developments are made to achieve a multiple-shooting formulation of the method. Multiple-shooting transcriptions have been demonstrated to benefit both direct and indirect optimal control methods: they reduce the large sensitivities present in highly nonlinear problems (thus improving the algorithms' robustness) and increase the potential for parallel implementation. The new Multiple-Shooting Differential Dynamic Programming (MDDP) algorithm is the first application of the well-known multiple-shooting principles to DDP. The algorithm uses a null-space trust-region method to optimize quadratic subproblems subject to simple bounds, which makes it possible to control the quality of the quadratic approximations of the objective function. Equality and inequality path and terminal constraints are treated with a general Augmented Lagrangian approach.
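As a concrete illustration of the multiple-shooting idea described above, the sketch below evaluates the continuity ("defect") constraints that appear when a trajectory is split into independently propagated segments, each with its own free initial state; driving all defects to zero stitches the segments back into one continuous trajectory. The dynamics (a harmonic oscillator), the fixed-step RK4 propagator, and all function names are illustrative assumptions, not the MDDP implementation.

```python
import numpy as np

def rk4_step(f, x, dt):
    """One classical Runge-Kutta 4 step for x' = f(x)."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def propagate(f, x0, t_span, n_steps):
    """Propagate x0 over t_span with n_steps fixed RK4 steps."""
    dt = (t_span[1] - t_span[0]) / n_steps
    x = x0.copy()
    for _ in range(n_steps):
        x = rk4_step(f, x, dt)
    return x

def continuity_defects(f, segment_states, t_nodes, n_steps=50):
    """Defect d_i = (terminal state of segment i) - (initial state of
    segment i+1).  In a multiple-shooting transcription these defects
    are constraints that the optimizer drives to zero."""
    defects = []
    for i in range(len(segment_states) - 1):
        x_end = propagate(f, segment_states[i],
                          (t_nodes[i], t_nodes[i + 1]), n_steps)
        defects.append(x_end - segment_states[i + 1])
    return np.array(defects)

# Toy dynamics: harmonic oscillator x'' = -x, state (x, v).
f = lambda s: np.array([s[1], -s[0]])

# Three segments whose initial states come from the exact solution
# x(t) = cos(t), v(t) = -sin(t), so the defects are near zero
# (limited only by integration error).
t_nodes = [0.0, 1.0, 2.0, 3.0]
segs = [np.array([np.cos(t), -np.sin(t)]) for t in t_nodes[:-1]]
print(np.max(np.abs(continuity_defects(f, segs, t_nodes))))
```

For a discontinuous guess (e.g., perturbing one interior state), the corresponding defect becomes nonzero by exactly the perturbation's propagated effect, which is what gives multiple shooting its reduced sensitivity: each segment only propagates errors over a short arc.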
The choice of a direct transcription and of an Augmented Lagrangian merit function, combined with automated computation of the partial derivatives, makes the MDDP implementation flexible, requiring minimal user effort when the dynamics, cost, or constraint functions change. The algorithm is implemented in a general, modular optimal control software package, and the performance of the multiple-shooting formulation is analyzed. The use of quasi-Newton approximations in the context of DDP is examined and numerically demonstrated to improve computational efficiency while retaining attractive convergence properties. The computational performance of an optimal control algorithm is closely tied to that of the integrator chosen for propagating the equations of motion. To improve the efficiency of the MDDP algorithm, a new numerical propagation method is developed for the Kepler, Stark, and three-body problems, three of the most commonly used dynamical models in spacecraft trajectory optimization. The method uses a time regularization technique, the generalized Sundman transformation, together with Taylor series developments of equivalents to the f and g functions for each problem. The performance of the new method is examined, and specific domains where the series solution outperforms existing propagation methods are identified. Finally, because the robustness and computational efficiency of the MDDP algorithm depend on the quality of the first- and second-order State Transition Matrices, the three most common techniques for their computation are analyzed, particularly for low-fidelity propagation. The propagation of variational equations is compared to the complex-step derivative approximation and to finite differences, for a variety of problems and integration techniques.
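The generalized Sundman transformation replaces physical time t with a fictitious time s through dt = c r^n ds, so that fixed steps in s concentrate integration points near perigee, where the dynamics vary fastest. The dissertation pairs this regularization with Taylor-series analogues of the f and g functions; the sketch below illustrates only the regularization itself, using an ordinary fixed-step RK4 integrator on the planar two-body problem in canonical units (the function names, step counts, and orbit are illustrative assumptions).

```python
import numpy as np

MU = 1.0  # gravitational parameter (canonical units)

def kepler_rhs(state):
    """Two-body equations of motion; state = (rx, ry, vx, vy)."""
    r, v = state[:2], state[2:]
    return np.concatenate([v, -MU * r / np.linalg.norm(r)**3])

def sundman_rhs(aug, n=1.0, c=1.0):
    """Right-hand side in the regularized variable s: with
    dt = c * r**n ds, each physical-time derivative is scaled by
    c * r**n, and time t is carried as an extra state component.
    aug = (rx, ry, vx, vy, t)."""
    scale = c * np.linalg.norm(aug[:2])**n
    return np.concatenate([scale * kepler_rhs(aug[:4]), [scale]])

def rk4(f, y, h, steps):
    """Fixed-step classical RK4 in the fictitious time s."""
    for _ in range(steps):
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

# Eccentric orbit (a = 1, e = 0.7), started at perigee.  With mu = a = 1
# and n = 1, s coincides with the eccentric anomaly, so one revolution
# is exactly delta-s = 2*pi, after which t should equal the period 2*pi
# and the state should return to its initial value.
e = 0.7
y0 = np.array([1 - e, 0.0, 0.0, np.sqrt((1 + e) / (1 - e)), 0.0])
yf = rk4(sundman_rhs, y0, 2 * np.pi / 400, 400)
print(abs(yf[4] - 2 * np.pi))            # time error after one period
print(np.linalg.norm(yf[:4] - y0[:4]))   # state error after one revolution
```

The same fixed step in s corresponds to short physical-time steps near perigee and long ones near apogee, which is the behavior an unregularized fixed-step integrator lacks on eccentric orbits.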
The subtle differences between variable- and fixed-step integration for the computation of partial derivatives are revealed, common pitfalls are identified, and recommendations are made to help practitioners improve the quality of their state transition matrices.
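Two of the State Transition Matrix techniques compared above can be sketched in a few lines. The complex-step approximation, Phi[:, j] ≈ Im(F(x0 + i h e_j)) / h, involves no subtraction of nearly equal quantities, so the step h can be made extremely small; central finite differences are limited by cancellation error and require a carefully tuned h. The toy flow map, step sizes, and function names below are illustrative assumptions, not the propagators studied in the dissertation.

```python
import numpy as np

def flow(x0, dt=0.1, steps=10):
    """Toy flow map F(x0): fixed-step RK4 on the harmonic oscillator
    x'' = -x.  Written to accept real or complex state vectors."""
    f = lambda s: np.array([s[1], -s[0]], dtype=s.dtype)
    x = np.asarray(x0)
    for _ in range(steps):
        k1 = f(x)
        k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2)
        k4 = f(x + dt * k3)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

def stm_finite_difference(x0, h=1e-7):
    """First-order STM by central differences, column by column."""
    n = len(x0)
    phi = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        phi[:, j] = (flow(x0 + e) - flow(x0 - e)) / (2 * h)
    return phi

def stm_complex_step(x0, h=1e-30):
    """First-order STM by the complex-step approximation: no
    subtractive cancellation, so h can be tiny."""
    n = len(x0)
    phi = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n, dtype=complex)
        e[j] = 1j * h
        phi[:, j] = flow(x0.astype(complex) + e).imag / h
    return phi

# For the oscillator the exact STM over t = dt*steps = 1 is a rotation.
x0 = np.array([1.0, 0.0])
t = 1.0
exact = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
print(np.max(np.abs(stm_complex_step(x0) - exact)))       # integrator truncation only
print(np.max(np.abs(stm_finite_difference(x0) - exact)))  # truncation + cancellation
```

Note that both approximations differentiate the discrete flow map, integrator errors included; this is why, as the abstract emphasizes, the choice of fixed- versus variable-step integration matters when computing partials, since adaptive step-size logic can make the discrete map non-smooth in the initial conditions.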