Preface
1 Introduction
1.1 Introduction to Iterative Learning Control
1.1.1 Contraction-Mapping Approach
1.1.2 Composite Energy Function Approach
1.2 Introduction to MAS Coordination
1.3 Motivation and Overview
1.4 Common Notations in This Book
2 Optimal Iterative Learning Control for Multi-agent Consensus Tracking
2.1 Introduction
2.2 Preliminaries and Problem Description
2.2.1 Preliminaries
2.2.2 Problem Description
2.3 Main Results
2.3.1 Controller Design for Homogeneous Agents
2.3.2 Controller Design for Heterogeneous Agents
2.4 Optimal Learning Gain Design
2.5 Illustrative Example
2.6 Conclusion
3 Iterative Learning Control for Multi-agent Coordination Under Iteration-Varying Graph
3.1 Introduction
3.2 Problem Description
3.3 Main Results
3.3.1 Fixed Strongly Connected Graph
3.3.2 Iteration-Varying Strongly Connected Graph
3.3.3 Uniformly Strongly Connected Graph
3.4 Illustrative Example
3.5 Conclusion
4 Iterative Learning Control for Multi-agent Coordination with Initial State Error
4.1 Introduction
4.2 Problem Description
4.3 Main Results
4.3.1 Distributed D-type Updating Rule
4.3.2 Distributed PD-type Updating Rule
4.4 Illustrative Examples
4.5 Conclusion
5 Multi-agent Consensus Tracking with Input Sharing by Iterative Learning Control
5.1 Introduction
5.2 Problem Formulation
5.3 Controller Design and Convergence Analysis
5.3.1 Controller Design Without Leader's Input Sharing
5.3.2 Optimal Design Without Leader's Input Sharing
5.3.3 Controller Design with Leader's Input Sharing
5.4 Extension to Iteration-Varying Graph
5.4.1 Iteration-Varying Graph with Spanning Trees
5.4.2 Iteration-Varying Strongly Connected Graph
5.4.3 Uniformly Strongly Connected Graph
5.5 Illustrative Examples
5.5.1 Example 1: Iteration-Invariant Communication Graph
5.5.2 Example 2: Iteration-Varying Communication Graph
5.5.3 Example 3: Uniformly Strongly Connected Graph
5.6 Conclusion
6 A HOIM-Based Iterative Learning Control Scheme for Multi-agent Formation
6.1 Introduction
6.2 Kinematic Model Formulation
6.3 HOIM-Based ILC for Multi-agent Formation
6.3.1 Control Law for Agent 1
6.3.2 Control Law for Agent 2
6.3.3 Control Law for Agent 3
6.3.4 Switching Between Two Structures
6.4 Illustrative Example
6.5 Conclusion
7 P-type Iterative Learning for Non-parameterized Systems with Uncertain Local Lipschitz Terms
7.1 Introduction
7.2 Motivation and Problem Description
7.2.1 Motivation
7.2.2 Problem Description
7.3 Convergence Properties with Lyapunov Stability Conditions
7.3.1 Preliminary Results
7.3.2 Lyapunov Stable Systems
7.3.3 Systems with Stable Local Lipschitz Terms but Unstable Global Lipschitz Factors
7.4 Convergence Properties in the Presence of Bounding Conditions
7.4.1 Systems with Bounded Drift Term
7.4.2 Systems with Bounded Control Input
7.5 Application of P-type Rule in MAS with Local Lipschitz Uncertainties
7.6 Conclusion
8 Synchronization for Nonlinear Multi-agent Systems by Adaptive Iterative Learning Control
8.1 Introduction
8.2 Preliminaries and Problem Description
8.2.1 Preliminaries
8.2.2 Problem Description for First-Order Systems
8.3 Controller Design for First-Order Multi-agent Systems
8.3.1 Main Results
8.3.2 Extension to Alignment Condition
8.4 Extension to High-Order Systems
8.5 Illustrative Example
8.5.1 First-Order Agents
8.5.2 High-Order Agents
8.6 Conclusion
9 Distributed Adaptive Iterative Learning Control for Nonlinear Multi-agent Systems with State Constraints
9.1 Introduction
9.2 Problem Formulation
9.3 Main Results
9.3.1 Original Algorithms
9.3.2 Projection Based Algorithms
9.3.3 Smooth Function Based Algorithms
9.3.4 Alternative Smooth Function Based Algorithms
9.3.5 Practical Dead-Zone Based Algorithms
9.4 Illustrative Example
9.5 Conclusion
10 Synchronization for Networked Lagrangian Systems under Directed Graphs
10.1 Introduction
10.2 Problem Description
10.3 Controller Design and Performance Analysis
10.4 Extension to Alignment Condition
10.5 Illustrative Example
10.6 Conclusion
11 Generalized Iterative Learning for Economic Dispatch Problem in a Smart Grid
11.1 Introduction
11.2 Preliminaries
11.2.1 In-Neighbor and Out-Neighbor
11.2.2 Discrete-Time Consensus Algorithm
11.2.3 Analytic Solution to EDP with Loss Calculation
11.3 Main Results
11.3.1 Upper Level: Estimating the Power Loss
11.3.2 Lower Level: Solving Economic Dispatch Distributively
11.3.3 Generalization to the Constrained Case
11.4 Learning Gain Design
11.5 Application Examples
11.5.1 Case Study 1: Convergence Test
11.5.2 Case Study 2: Robustness of Command Node Connections
11.5.3 Case Study 3: Plug and Play Test
11.5.4 Case Study 4: Time-Varying Demand
11.5.5 Case Study 5: Application in Large Networks
11.5.6 Case Study 6: Relation Between Convergence Speed and Learning Gain
11.6 Conclusion
12 Summary and Future Research Directions
12.1 Summary
12.2 Future Research Directions
12.2.1 Open Issues in MAS Control
12.2.2 Applications
Appendix A Graph Theory Revisit
Appendix B Detailed Proofs
B.1 HOIM Constraints Derivation
B.2 Proof of Proposition 2.1
B.3 Proof of Lemma 2.1
B.4 Proof of Theorem 8.1
B.5 Proof of Corollary 8.1
Bibliography
Index
Iterative learning control (ILC), as an effective control strategy, is designed to improve current control performance for unpredictable systems by fully utilizing past control experience. Specifically, ILC is designed for systems that complete tasks over a fixed time interval and perform them repeatedly. The underlying philosophy mimics the human learning process that "practice makes perfect." By synthesizing control inputs from previous control inputs and tracking errors, the controller is able to learn from past experience and improve current tracking performance. ILC was initially developed by Arimoto et al. (1984), and has been widely explored by the control community since then (Moore, 1993; Bien and Xu, 1998; Chen and Wen, 1999; Longman, 2000; Norrlof and Gunnarsson, 2002; Xu and Tan, 2003; Bristow et al., 2006; Moore et al., 2006; Ahn et al., 2007a; Rogers et al., 2007; Ahn et al., 2007b; Xu et al., 2008; Wang et al., 2009, 2014).
Figure 1.1 shows the schematic diagram of an ILC system, where the subscript i denotes the iteration index and yd denotes the reference trajectory. Based on the input signal ui at the ith iteration, as well as the tracking error ei, the input for the next iteration, namely the (i+1)th iteration, is constructed. Meanwhile, the input signal is also stored in memory for use in the following iteration. It is important to note that in Figure 1.1 a closed feedback loop is formed in the iteration domain rather than the time domain. Compared to other control methods such as proportional-integral-derivative (PID) control and sliding mode control, ILC has a number of distinctive features. First, ILC is designed to handle repetitive control tasks, while other control techniques do not typically take advantage of task repetition: in a repeatable control environment, repeating the same feedback yields the same control performance. In contrast, by incorporating learning, ILC is able to improve the control performance iteratively. Second, the control objective is different. ILC aims at achieving perfect tracking over the whole operational interval, whereas most control methods aim at asymptotic convergence of the tracking accuracy over time. Third, ILC is a feedforward control method when viewed in the time domain. The plant shown in Figure 1.1 is a generalized plant, that is, it can itself include a feedback loop, and ILC can be used to further improve the performance of this generalized plant. As such, the generalized plant can be made stable in the time domain, which helps to guarantee the transient response while learning takes place. Last but not least, ILC is a partially model-free control method: as long as an appropriate learning gain is chosen, perfect tracking can be achieved without a perfect plant model.
Figure 1.1 The framework of ILC.
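In its simplest first-order form, the learning update sketched in Figure 1.1 can be written generically as
\[ u_{i+1}(t) = u_i(t) + L\big(e_i(t)\big), \qquad e_i(t) = y_d(t) - y_i(t), \]
where L is a learning operator acting on the tracking error; this operator and the error convention are illustrative placeholders rather than notation fixed by this chapter. Choosing L as a scaled time derivative of the error, for instance, gives the D-type law applied below.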
Generally speaking, there are two main frameworks for ILC, namely contraction-mapping (CM)-based and composite energy function (CEF)-based approaches. A CM-based iterative learning controller has a very simple structure and is easy to implement. A correction term in the controller is constructed from the output tracking error; to ensure convergence, an appropriate learning gain is selected based on system gradient information in place of an accurate dynamic model. As a partially model-free control method, CM-based ILC is applicable to non-affine-in-input systems. These features are highly desirable in practice, as plenty of data are available in industrial processes while accurate system models are in short supply. CM-based ILC has been adopted in many applications, for example X-Y tables, chemical batch reactors, laser cutting systems, motor control, water heating systems, freeway traffic control, and wafer manufacturing (Ahn et al., 2007a). A limitation of CM-based ILC is that it is only applicable to globally Lipschitz continuous (GLC) systems. The GLC condition is required in order to form a contraction mapping and to rule out the finite escape time phenomenon. In comparison, CEF-based ILC, a complementary approach to CM-based ILC, applies a Lyapunov-like method to design learning rules. It is effective in handling locally Lipschitz continuous (LLC) systems, because the system dynamics are used in the design of the learning and feedback mechanisms. It is, however, worth pointing out that in CM-based ILC the learning mechanism only requires output signals, while in CEF-based ILC full state information is usually required. CEF-based ILC has been applied in satellite trajectory keeping (Ahn et al., 2010) and robotic manipulator control (Tayebi, 2004; Tayebi and Islam, 2006; Sun et al., 2006).
This book follows the two main frameworks and investigates the multi-agent coordination problem using ILC. To illustrate the underlying idea and properties of ILC, we start with a simple ILC system.
Consider the linear time-invariant dynamics (1.1), where i is the iteration index, a is an unknown constant parameter, and T is the trial length. Let the target trajectory be xd(t), which is generated by the corresponding reference dynamics with ud(t) the desired control signal. The control objective is to tune ui(t) such that, without any prior knowledge of the parameter a, the tracking error ei(t) converges to zero as the iteration number increases, that is, ei(t) → 0 as i → ∞ for all t ∈ [0, T].
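The displayed equations are not reproduced in this excerpt; a plausible scalar realization consistent with the description (an unknown parameter a and trial length T) is
\[ \dot{x}_i(t) = a\,x_i(t) + u_i(t), \qquad \dot{x}_d(t) = a\,x_d(t) + u_d(t), \qquad t \in [0, T], \]
taken here as an assumed stand-in for (1.1) and the reference dynamics.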
We perform the ILC controller design and convergence analysis for this simple control problem under the frameworks of both CM-based and CEF-based approaches, in order to illustrate the basic concepts in ILC and analysis techniques. To restrict our discussion, the following assumptions are imposed on the dynamical system (1.1).
Assumption 1.1 The identical initialization condition holds for all iterations, that is, xi(0) = xd(0), ∀ i.
Assumption 1.2 For the target trajectory xd(t), t ∈ [0, T], there exists a desired control signal ud(t) under which the system dynamics reproduce xd(t).
Under the framework of the CM-based methodology, we apply the D-type updating law (1.3), in which the learning gain is to be determined, to solve the trajectory tracking problem. Our objective is to show that the input generated by the ILC law (1.3) converges to the desired ud, which in turn implies the convergence of the tracking error ei(t) as i increases.
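For concreteness, a standard D-type law of this kind, written with an assumed scalar learning gain γ and the error convention ei(t) = xd(t) − xi(t), reads
\[ u_{i+1}(t) = u_i(t) + \gamma\,\dot{e}_i(t). \]
The derivative of the error, rather than the error itself, drives the update, which is what makes the law "D-type".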
Define the input error δui(t) = ud(t) − ui(t). First, from the updating law (1.3) we can derive the relation (1.4) between δui+1 and δui. Furthermore, the state error dynamics are given by (1.5). Combining (1.4) and (1.5) gives (1.6). Integrating both sides of the state error dynamics and using Assumption 1.1, so that ei(0) = 0, yields (1.7). Then, substituting (1.7) into (1.6), we obtain (1.8).
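Under the assumed plant and D-type law above, with δui = ud − ui, these steps take the form
\[ \delta u_{i+1} = \delta u_i - \gamma\,\dot{e}_i, \qquad \dot{e}_i = a\,e_i + \delta u_i, \]
\[ \delta u_{i+1} = (1-\gamma)\,\delta u_i - \gamma a\,e_i, \qquad e_i(t) = \int_0^t e^{a(t-\tau)}\,\delta u_i(\tau)\,\mathrm{d}\tau, \]
\[ \delta u_{i+1}(t) = (1-\gamma)\,\delta u_i(t) - \gamma a \int_0^t e^{a(t-\tau)}\,\delta u_i(\tau)\,\mathrm{d}\tau, \]
a sketch of the chain (1.4)–(1.8) under those assumptions.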
Taking the λ-norm on both sides of (1.8) gives (1.9), where the λ-norm of a function f is defined as ‖f‖λ = sup_{t∈[0,T]} e^{−λt}|f(t)| with λ > 0. The λ-norm is just a time-weighted norm and is used to simplify the derivation; it will be formally defined in Section 1.4.
If the learning gain in (1.3) is chosen appropriately, it is possible to select a sufficiently large λ such that the contraction coefficient in (1.9) is less than one. Therefore, (1.9) implies that ‖δui‖λ converges to zero as i increases, namely ui converges to the desired input ud and hence the tracking error converges to zero.
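The convergence behavior is easy to check numerically. The script below is a minimal sketch assuming the scalar plant dx/dt = a x + u, a sinusoidal target trajectory, and a learning gain of 0.8; all of these specific choices are illustrative assumptions rather than values taken from the text.

```python
import numpy as np

# D-type ILC on an assumed scalar plant x_dot = a*x + u.
a = 1.0                      # plant parameter (unknown to the controller)
T, dt = 1.0, 1e-3            # trial length and integration step
t = np.arange(0.0, T + dt, dt)
gamma = 0.8                  # learning gain, chosen so that |1 - gamma| < 1

x_d = np.sin(2 * np.pi * t)  # assumed target trajectory x_d(t)
u = np.zeros_like(t)         # initial input guess u_0(t) = 0


def run_trial(u):
    """Simulate one iteration with forward Euler, x_i(0) = x_d(0)."""
    x = np.empty_like(t)
    x[0] = x_d[0]            # identical initialization condition
    for k in range(len(t) - 1):
        x[k + 1] = x[k] + dt * (a * x[k] + u[k])
    return x


for i in range(10):
    x = run_trial(u)
    e = x_d - x                          # tracking error e_i = x_d - x_i
    u = u + gamma * np.gradient(e, dt)   # D-type update: u_{i+1} = u_i + gamma * de_i/dt
    print(f"iteration {i}: max |e_i| = {np.max(np.abs(e)):.2e}")
```

The maximum tracking error shrinks over the iterations, mirroring the contraction argument above, even though the controller never uses the value of a.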
In this subsection, the ILC controller is developed and analyzed under the framework of the CEF-based approach. First of all, the error dynamics of the system (1.1) can be expressed as in (1.10), where xd is the target trajectory.
Let k be a positive constant. By applying the control law (1.11) and the parametric updating law (1.12), we can obtain the convergence of the tracking error ei to zero as i tends to infinity.
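Neither (1.11) nor (1.12) is reproduced in this excerpt; under the assumed scalar plant and the error convention ei = xd − xi used above, a representative control/updating pair of this type is
\[ u_i(t) = \dot{x}_d(t) + k\,e_i(t) - \hat{a}_i(t)\,x_i(t), \qquad \hat{a}_i(t) = \hat{a}_{i-1}(t) - q\,x_i(t)\,e_i(t), \]
where \( \hat{a}_i \) is the iteration-wise estimate of the unknown parameter a, q > 0 is an assumed adaptation gain, and the estimate before the first iteration is initialized to zero. With this pair, the error dynamics become \( \dot{e}_i = -k\,e_i - \tilde{a}_i\,x_i \) with \( \tilde{a}_i = a - \hat{a}_i \).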
In order to facilitate the convergence analysis of the proposed ILC scheme, we introduce the CEF Ei given in (1.13), where ãi denotes the estimation error of the unknown parameter a.
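A typical choice for such a CEF, written with the assumed adaptation gain q from the sketch above, is
\[ E_i(t) = \frac{1}{2}\,e_i^2(t) + \frac{1}{2q}\int_0^t \tilde{a}_i^2(\tau)\,\mathrm{d}\tau, \]
where the first term penalizes the instantaneous tracking error and the integral term accumulates the parameter estimation error along the trial.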
The difference of Ei between two consecutive iterations, ΔEi = Ei − Ei−1, is given in (1.14). By using the identical initialization condition in Assumption 1.1, the error dynamics (1.10), and the control law (1.11), the first term on the right hand side of (1.14) can be calculated as in (1.15). In addition, by applying the updating law (1.12), the second term on the right hand side of (1.14) can be expressed as in (1.16). Clearly, the cross term ãi xi ei appears in (1.15) and (1.16) with opposite signs. Combining (1.14), (1.15), and (1.16) yields ΔEi ≤ 0.
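Under the assumed control law, updating law, and CEF above, and using ei(0) = 0, the two terms evaluate as
\[ \frac{1}{2}e_i^2(t) - \frac{1}{2}e_{i-1}^2(t) = \int_0^t \big(-k\,e_i^2 - \tilde{a}_i\,x_i\,e_i\big)\,\mathrm{d}\tau - \frac{1}{2}e_{i-1}^2(t), \]
\[ \frac{1}{2q}\int_0^t \big(\tilde{a}_i^2 - \tilde{a}_{i-1}^2\big)\,\mathrm{d}\tau = \int_0^t \Big(\tilde{a}_i\,x_i\,e_i - \frac{q}{2}\,x_i^2\,e_i^2\Big)\,\mathrm{d}\tau, \]
so the cross terms cancel and
\[ \Delta E_i = -\int_0^t k\,e_i^2\,\mathrm{d}\tau - \frac{1}{2}e_{i-1}^2(t) - \frac{q}{2}\int_0^t x_i^2\,e_i^2\,\mathrm{d}\tau \le 0, \]
which is the non-increasing property asserted in the text, derived here under the stated assumptions.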
Ei thus forms a monotonically decreasing sequence along the iteration axis, hence is bounded if E0...