Optimal Control is a modern development of the calculus of variations and classical optimization theory. This introduction to the theory of Optimal Control therefore begins with the problem of minimizing a function of many variables, and moves from there, via an exposition of the calculus of variations, to the main subject: the optimal control of systems governed by ordinary differential equations. This approach should enable the student to see the essential unity of these three areas of mathematics, and places Optimal Control and the Pontryagin Maximum Principle in their proper context. A good knowledge of analysis, algebra, and methods, comparable to that of a diligent British undergraduate at the start of the final year, is assumed. All the theorems are carefully proved, and there are many worked examples and exercises for the student. Although this book is written for the undergraduate mathematician, engineers and scientists with a taste for mathematics will also find it a useful text.
Language
Place of publication
Target audience
For higher education and university study
For professional and research use
Illustrations
ISBN-13
978-0-19-853217-0 (9780198532170)
Part 1 Introduction: the maxima and minima of functions; the calculus of variations; optimal control.
Part 2 Optimization in R^n: functions of one variable; critical points, end-points, and points of discontinuity; functions of several variables; minimization with constraints; a geometrical interpretation; distinguishing maxima from minima.
Part 3 The calculus of variations: problems in which the end-points are not fixed; finding minimizing curves; isoperimetric problems; sufficiency conditions; fields of extremals; Hilbert's invariant integral; semi-fields and the Jacobi condition.
Part 4 Optimal control 1 - theory: control of a simple first-order system; systems governed by ordinary differential equations; the optimal control problem; the Pontryagin maximum principle; optimal control to target curves.
Part 5 Optimal control 2 - applications: time-optimal control of linear systems; optimal control to target curves; singular controls; fuel-optimal control; problems where the cost depends on x(t1); linear systems with quadratic cost; the steady-state Riccati equation; the calculus of variations revisited.
Part 6 Proof of the maximum principle of Pontryagin: convex sets in R^n; the linearized state equations; the behaviour of H on an optimal path; sufficiency conditions for optimal control.
Part 7 Answers and hints for the exercises.
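For orientation, the problem running through Parts 4-6 is the standard optimal control problem, which can be stated in generic notation (not necessarily the book's own) as:

```latex
\text{minimize } \; J(u) = \int_{t_0}^{t_1} f_0\bigl(x(t), u(t), t\bigr)\, dt
\quad \text{subject to} \quad
\dot{x}(t) = f\bigl(x(t), u(t), t\bigr), \qquad u(t) \in U,
```

where $x(t) \in \mathbb{R}^n$ is the state, $u(t)$ is the control constrained to a set $U$, and the Pontryagin maximum principle gives necessary conditions for an optimal $u$.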