Formation Control of Multi-Agent Systems: A Graph Rigidity Approach
Marcio de Queiroz, Louisiana State University, USA
Xiaoyu Cai, FARO Technologies, USA
Matthew Feemster, U.S. Naval Academy, USA
A comprehensive guide to formation control of multi-agent systems using rigid graph theory
This book is the first to provide a comprehensive and unified treatment of the subject of graph rigidity-based formation control of multi-agent systems. Such systems are relevant to a variety of emerging engineering applications, including unmanned robotic vehicles and mobile sensor networks. Graph theory, and rigid graphs in particular, provides a natural tool for describing the multi-agent formation shape as well as the inter-agent sensing, communication, and control topology.
Beginning with an introduction to rigid graph theory, the contents of the book are organized by the agent dynamic model (single integrator, double integrator, and mechanical dynamics) and by the type of formation problem (formation acquisition, formation maneuvering, and target interception). The book presents the material in ascending level of difficulty and in a self-contained manner, facilitating reader understanding.
* Uses the concept of graph rigidity as the basis for describing the multi-agent formation geometry and solving formation control problems.
* Considers different agent models and formation control problems.
* Control designs throughout the book progressively build upon each other.
* Provides a primer on rigid graph theory.
* Combines theory, computer simulations, and experimental results.
Formation Control of Multi-Agent Systems: A Graph Rigidity Approach is targeted at researchers and graduate students in the areas of control systems and robotics. Prerequisite knowledge includes linear algebra, matrix theory, control systems, and nonlinear systems.
"The whole is more than the sum of its parts."
This book is devoted to multi-agent systems. Since this term has different meanings within different research communities, we deem it necessary to precisely define the meaning used here. In this book, a multi-agent system refers to a network of interacting, mobile, physical entities that collectively perform a complex task beyond their individual capabilities.
Nature is replete with biological systems that fit this definition: a flock of birds, a school of fish, and a colony of insects (see Figure 1.1), to name a few. The behavior of such biological swarms is decentralized since each biological agent does not have access to global knowledge or supervision, but uses its own local sensing, decision, and control mechanisms.
Ants are a model example of a biological multi-agent system. Ant colonies share the common goals of surviving, growing, and reproducing. Their sense of community is so strong that they behave like a single "superorganism" that can solve difficult problems by processing information as a collection [1]. This collective behavior facilitates food gathering, defending nests against enemies, and building intricate structures with tunnels, chambers, and ventilation systems. Ants accomplish such feats without a supervisor telling them what to do. Rather, ant workers perform tasks based on personal aptitudes, communications with colony mates, and cues from the environment. Interactions with other ants and the environment occur via chemicals, which they sense with their antennae [1].
Nature is inspiring humans to engineer multi-agent systems that mimic this distributed, coordinated behavior. The agents in such engineering systems are not living beings, but machines such as robots, vehicles, and/or mobile sensors (see Figure 1.2). Recent advances in sensor technology, embedded systems, communication systems, and power storage now make it feasible to deploy such swarms of cooperating agents for various civilian and military applications. For instance, a group of autonomous (ground, underwater, water surface, or air) vehicles could be deployed in large disaster areas to perform search, mapping, surveillance, environmental monitoring, or cleanup without putting first responders in harm's way. Some recent examples of such situations are Hurricane Katrina in 2005, the BP oil spill in the Gulf of Mexico in 2010, and the Fukushima nuclear disaster in 2011. Another application is a military mission where a group of unmanned air vehicles surround and intercept an intruding or evading aircraft or enemy combatants. Yet another potential application is a team of vehicles cooperatively transporting an object too large and/or heavy for a single vehicle to transport.
Figure 1.1 Examples of collective behavior in nature: a flock of birds (top left), a school of fish (top right), and a swarm of ants building a bridge (bottom).
Figure 1.2 Examples of engineering multi-agent systems.
One may wonder why a multi-agent system should be used instead of a single "large agent". There are several advantages to doing so: more efficient and complex task execution, robustness when one or more agents fail, scalability, versatility, adaptability, and lower cost [2]. For example, multiple agents could position themselves relative to each other to create a virtual, large-scale antenna with higher sensitivity to acoustic signals than would be possible with a single antenna. If one of the agents malfunctions, the remaining ones would reconfigure to keep the antenna operational, whereas the stand-alone antenna would be a single point of failure. Malfunctions are also less likely in an individual agent than in a single large system because each agent is usually much simpler in both hardware and software. This simplicity, along with larger production quantities, also leads to mass production at a low cost.
On the other hand, multi-agent systems introduce a host of unique challenges, including coordination and cooperation schemes, distribution of information and subtasks, negotiation between team and individual goals, communication protocols, sensing, and collision avoidance. These challenges are exacerbated by the fact that often the task is to be completed with limited computational, communication, and sensing resources. A key design decision is between a centralized coordination scheme and a decentralized/distributed one. In a centralized scheme, each agent has access to measurement and/or control information from a master entity, such as a central processing unit or a global positioning system (GPS). Therefore, centralized schemes have a single point of failure like a single "large agent". They also do not scale well with the number of agents because the processing overhead and number of communication links become prohibitive.
The multi-agent literature has numerous references to decentralized and distributed schemes with the two terms used interchangeably. Unfortunately, there does not seem to be a precise definition for either concept, with different authors using different definitions. In order to avoid misunderstandings, it is important that we define what we mean by decentralized/distributed in this book.
Definition 1.1 A decentralized (or distributed) multi-agent coordination scheme is one where any information (cognitive, sensory, computational, control, etc.) is acquired locally by each agent via onboard hardware and software, including sensors, communication links, memory, and processors. That is, only the agents themselves are responsible for acquiring information and sharing it as necessary without external aid. Information can be input to an agent by onboard sensors or wireless communication with other agents, or pre-programmed into the agent.
Sometimes it is debatable whether a coordination scheme can be classified as centralized or decentralized. For example, in a leader-follower scheme, the leader agent can be viewed as the master entity despite being a local component of the multi-agent system. However, the coordination scheme can be designed such that if the leader malfunctions, another agent takes over this role with minimal disruption to the assignment. According to Definition 1.1, a leader-follower scheme where the leader is an integral part of the multi-agent system is deemed decentralized in this book.
A number of coordination and cooperation problems for multi-agent systems are described in the literature: aggregation, consensus, agreement, rendezvous, synchronization, social foraging, flocking, coverage, scheduling, and formation [2, 3, 4]. Most of these problems are similar in that the purpose is to drive the multiple agents to some common state (position, velocity, frequency, arrival time, temperature, voltage, etc.), which in general may not be related to their motion. This book is devoted strictly to the class of formation problems. Specifically, our focus is on three related problems with increasing levels of complexity: formation acquisition (where agents are required to form and maintain a pre-defined geometric configuration in space), formation maneuvering (where agents are required to simultaneously acquire a formation and move as a unit following a pre-defined trajectory), and target interception (where agents intercept and surround a moving target with a pre-defined formation). Note that formation acquisition is a pre-condition for formation maneuvering and target interception. An example of an application where formation control is necessary is the use of a fleet of unmanned air vehicles (UAVs) to create an aerial image of a large area with high spatial resolution. The UAVs need to be properly positioned relative to each other such that each UAV image can be stitched together, with no gaps, to form the overall area map. Another example is maintaining the optimal placement of a network of mobile sensors during static and dynamic target tracking applications such that the collective sensing performance is improved [5]. The classification of formation problems used in this book may have elements of and partially overlap with some of the coordination/cooperation problems listed above. For instance, formation maneuvering is related to flocking, where agents have to move cohesively without colliding with each other.
Often in formation control the main concern is convergence to the desired spatial configuration irrespective of its exact global position in space. That is, the formation is to be acquired up to rotation and translation of the whole set of agent positions. This means that only the relative positions of agents need to be known by the control algorithm. Formation control problems are relatively straightforward to solve when the agents' absolute coordinates (i.e., with respect to an Earth-fixed coordinate frame) are available via a central planner. From this information, the relative positions can be readily calculated. However, a GPS, which is typically used in such cases, can lose accuracy when the line of sight between the GPS receiver and the satellites is obstructed (e.g., urban areas, dense vegetation, underwater). Therefore, one would like to use a decentralized formation scheme where information is obtained from a suite of onboard sensors...
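The invariance of a formation to rotation and translation can be illustrated with a short numerical sketch. The idea, central to the rigidity-based approach, is that a formation specified by inter-agent distances is unchanged when the whole set of agent positions is rotated and translated. The script below (an illustrative example, not from the book; the helper function names and the unit-square formation are chosen for this sketch) verifies this for a four-agent planar formation:

```python
import math

def pairwise_distances(points):
    """Inter-agent distances, indexed by agent pair (i, j) with i < j."""
    return {(i, j): math.dist(p, q)
            for i, p in enumerate(points)
            for j, q in enumerate(points) if i < j}

def rotate_translate(points, theta, tx, ty):
    """Apply a planar rotation by angle theta and a translation (tx, ty)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

# Desired formation: a unit square (the absolute coordinates are arbitrary).
desired = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

# The same shape placed elsewhere in the plane.
moved = rotate_translate(desired, theta=0.7, tx=5.0, ty=-3.0)

# The inter-agent distances -- the quantities a rigidity-based controller
# regulates -- are identical for both configurations.
d0 = pairwise_distances(desired)
d1 = pairwise_distances(moved)
error = max(abs(d0[k] - d1[k]) for k in d0)
print(f"max distance error: {error:.2e}")
```

The maximum distance error is zero up to floating-point round-off, confirming that a distance-based formation specification requires only relative information, with no dependence on an Earth-fixed frame.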