Application of automatic differentiation to trajectory optimization via direct multiple shooting

Garza, David Marcelo


Automatic differentiation, also called computational differentiation and algorithmic differentiation, is the process of computing the derivatives or Taylor series of functions from the computer source code implementing the functions. To date, general-purpose trajectory optimization codes have relied on finite differencing to compute the gradients needed by the nonlinear programming (NLP) algorithms within the codes. These codes typically support the selection of an arbitrary objective and constraint set from a library of a few hundred output variables. The use of automatic differentiation in these trajectory optimization programs can provide objective and constraint gradients to the same precision as the underlying functions, without requiring the generation of hundreds of analytic derivative expressions by hand or via symbolic algebra packages. This work combines automatic differentiation with a direct multiple shooting method and uses the resulting method to solve a pair of example problems. The first is the well-known lunar launch problem; the second is a launch vehicle ascent problem similar in complexity to those solved by a program such as the Program to Optimize Simulated Trajectories (POST) in vehicle design studies. Results include comparisons of convergence behavior of the NLP problem and solution accuracy. Tests comparing the use of Euler angles versus quaternion elements as control variables demonstrate the versatility of automatic differentiation. For loose convergence levels, automatic differentiation provided faster convergence than finite differencing on the launcher ascent problem. For tight accuracy requirements, automatic differentiation resulted in fewer major iterations on the lunar launch problem.
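The precision claim above can be illustrated with a minimal forward-mode automatic differentiation sketch using dual numbers. This is not the implementation used in the thesis; it is a small, self-contained example (the `Dual` class and test function `f` are illustrative assumptions) showing that AD propagates exact derivative values through the same operations that evaluate the function, whereas finite differencing approximates the derivative and incurs truncation and round-off error.

```python
import math

class Dual:
    """Forward-mode AD via dual numbers: carries a value and its derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        # Product rule: (uv)' = u'v + uv'
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

    def sin(self):
        # Chain rule: (sin u)' = cos(u) * u'
        return Dual(math.sin(self.val), math.cos(self.val) * self.der)

def f(x):
    """Illustrative test function f(x) = x * sin(x)."""
    return x * x.sin() if isinstance(x, Dual) else x * math.sin(x)

x0 = 1.0
# AD: seed the derivative with 1.0 and read off f'(x0) exactly.
ad_deriv = f(Dual(x0, 1.0)).der
# Central finite difference with step h, as a conventional comparison.
h = 1e-6
fd_deriv = (f(x0 + h) - f(x0 - h)) / (2.0 * h)
# Analytic derivative: f'(x) = sin(x) + x*cos(x)
exact = math.sin(x0) + x0 * math.cos(x0)
```

Here the AD result matches the analytic derivative to machine precision, while the finite-difference result carries an error controlled by the step size `h` — the trade-off that motivates replacing finite differencing in the NLP gradient computations discussed above.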