Chapter 2: Control theory
The control of dynamical systems in engineered processes and machinery falls under control engineering and applied mathematics. The goal is to develop a model or algorithm governing the application of system inputs that drives the system toward a desired state while minimizing delay, overshoot, and steady-state error and ensuring a level of control stability; this is frequently done with the aim of achieving some degree of optimality.
A controller with the necessary corrective behavior is required. This controller monitors the controlled process variable (PV) and compares its value with a reference or set point (SP). The difference between the actual and desired values of the process variable, called the error signal or SP-PV error, is applied as feedback to generate a control action that brings the controlled process variable to the same value as the set point. The study of controllability and observability is another component of the field. Automation designed with the aid of control theory has transformed the industrial, aviation, communications, and other industries and given rise to new ones such as robotics.
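The SP-PV error and the corrective action it drives can be sketched in a few lines. This is a hypothetical proportional controller; the gain k_p and the numeric values are assumptions for illustration, not part of the text above.

```python
def control_action(set_point: float, process_variable: float, k_p: float = 2.0) -> float:
    """Return a control action proportional to the SP-PV error."""
    error = set_point - process_variable  # SP-PV error: desired minus actual
    return k_p * error                    # corrective action grows with the error

# A process variable below the set point produces a positive, corrective action:
u = control_action(set_point=20.0, process_variable=18.5)
```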
Block diagrams are used extensively. The differential equations describing the system are used to build a mathematical model of the relationship between input and output known as the transfer function, also referred to as the system function or network function.
Control theory first emerged in the 19th century, when James Clerk Maxwell outlined the theoretical underpinnings of governor operation. Although designing process control systems for industry is a key application of mathematical control theory, its uses extend far beyond this. As the general theory of feedback systems, control theory is applicable wherever feedback occurs; it therefore also has applications in the life sciences, computer engineering, sociology, and operations research.
Although several forms of control systems have existed since antiquity, James Clerk Maxwell's dynamics analysis of the centrifugal governor in his 1868 paper On Governors marked the beginning of a more rigorous understanding of the field.
Crewed flight was one field in which dynamic control played a prominent role. The Wright brothers, whose first successful test flight was on December 17, 1903, were distinguished by their ability to control their flights for substantial periods (more so than by the ability to produce lift from an airfoil, which was already known). Flights lasting longer than a few seconds required continuous, dependable control of the aircraft.
By the Second World War, control theory was becoming a significant field of study.
Irmgard Flügge-Lotz developed the theory of discontinuous automatic control systems and applied the bang-bang principle to automatic flight control equipment for aircraft.
Discontinuous controls were also applied to fire-control systems and electrical guidance systems.
Mechanical techniques can occasionally be employed to increase a system's stability. Ship stabilizers, for instance, are fins mounted below the waterline and extending laterally. Modern vessels may use gyroscopically controlled active fins that can vary their angle of attack to counteract roll caused by wind or waves acting on the ship.
Accurate spacecraft control was essential during the Space Race, and applications of control theory have since grown in areas such as economics and artificial intelligence. The objective in these fields can be described as finding an internal model that obeys the good regulator theorem. For example, the more exactly a (stock or commodities) trading model represents the behavior of a market, the more readily it can manipulate that market (and extract "useful work", i.e. profits, from it). An AI example is a chatbot that models the discourse state of humans: the better it can simulate the human state (for instance, on a voice-support hotline), the more effectively it can control the human (e.g. into performing the corrective actions that resolve the problem that prompted the call). These last two examples broaden control theory's historically narrow interpretation, as a system of differential equations for modeling and controlling kinetic motion, into a sweeping generalization of a regulator interacting with a plant.
The two basic forms of control loop are open-loop control (feedforward) and closed-loop control (feedback).
When using open-loop control, the controller's control action is independent of the "process output" (or "controlled process variable"). A good illustration is a central heating boiler regulated solely by a timer, so that heat is applied for a fixed period regardless of the building's temperature. The variable of interest is the building's temperature, but because the timer controls the boiler open-loop, no closed-loop control of the temperature is provided.
In closed-loop control, the controller's control action depends on the process output. In the boiler example, this would include a thermostat that monitors the building's temperature and feeds a signal back to the controller to ensure it keeps the building at the set temperature. A closed-loop controller therefore has a feedback loop which ensures the controller exerts a control action to produce a process output identical to the "reference input" or "set point." For this reason, closed-loop controllers are also known as feedback controllers.
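The boiler example can be sketched as a contrast between the two strategies. This is a hypothetical sketch: the function names, set point, and hysteresis band are all assumptions.

```python
# Open-loop: the timer alone decides; the measured temperature is ignored.
def open_loop_heater(minute: int, on_minutes: int = 30) -> bool:
    """Heater runs for a fixed part of each hour, regardless of temperature."""
    return (minute % 60) < on_minutes

# Closed-loop: the measured temperature (process variable) drives the action.
def thermostat(temperature: float, set_point: float = 20.0, hysteresis: float = 0.5) -> bool:
    """Heat only when the temperature falls below the set point, with a small
    hysteresis band so the heater does not switch on and off rapidly."""
    return temperature < set_point - hysteresis
```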
A feedback control system, on the other hand, attempts to maintain a specified relationship between two system variables by comparing the functions of these variables and employing the difference as a control mechanism.
Contrary to an open-loop controller or non-feedback controller, a closed-loop controller or feedback controller has a control loop that includes feedback. Using feedback, a closed-loop controller regulates the states or outputs of a dynamical system. Its name is derived from the system's information flow: process inputs, such as voltage applied to an electric motor, have an impact on the process outputs, such as the motor's speed or torque, which are measured with sensors and processed by the controller; the result (the control signal), then, is "fed back" as input to the process, closing the loop.
A control loop made up of sensors, control algorithms, and actuators is built up in linear feedback systems in an effort to maintain a variable at a setpoint (SP). An example from daily life is the cruise control on a car, which allows the driver to vary the intended set speed in response to external factors like hills. By regulating the engine's power output, the PID algorithm in the controller optimally returns the real speed to the intended speed with little delay or overshoot. Feedback is used by control systems that can, to some extent, adapt to changing conditions and contain some sensing of the results they are seeking to attain. Open-loop control systems only operate in predetermined ways and do not use feedback.
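The cruise-control loop can be sketched with a discrete-time PID controller driving a toy vehicle model. This is a sketch under assumed values: the first-order plant with a drag term and the gains kp, ki, kd are illustrative, not tuned for any real vehicle.

```python
def simulate_cruise(set_speed: float, steps: int = 400, dt: float = 0.1,
                    kp: float = 1.0, ki: float = 0.5, kd: float = 0.05) -> float:
    """Drive a toy first-order vehicle model toward `set_speed` with a PID loop."""
    speed = 0.0                      # process variable: actual speed
    integral = 0.0
    prev_error = set_speed - speed
    for _ in range(steps):
        error = set_speed - speed    # SP - PV error
        integral += error * dt
        derivative = (error - prev_error) / dt
        u = kp * error + ki * integral + kd * derivative  # control signal (throttle)
        prev_error = error
        # toy plant: throttle u accelerates the car, drag slows it down
        speed += dt * (u - 0.2 * speed)
    return speed
```

With the integral term present, the steady-state error vanishes: the loop settles at the set speed despite the constant drag acting on the toy plant.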
Closed-loop controllers have the following advantages over open-loop controllers:
disturbance rejection (such as hills in the cruise control example above)
guaranteed performance even with model errors, when the model structure does not match the real process exactly and the model parameters are not exact
unstable processes can be stabilized
reduced sensitivity to parameter variations
improved reference tracking performance
There are certain systems that combine closed-loop and open-loop control. The open-loop control used in these systems is known as feedforward, and it helps to further increase the performance of reference tracking.
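Combining the two can be sketched as a feedforward term computed from a nominal plant model plus a feedback correction. The names and gains here are hypothetical; the nominal gain stands in for an assumed plant model.

```python
def combined_control(set_point: float, measurement: float,
                     nominal_gain: float = 0.2, kp: float = 1.0) -> float:
    """Feedforward from a nominal model plus feedback on the residual error."""
    feedforward = nominal_gain * set_point      # open-loop term: what the nominal
                                                # model says holds the set point
    feedback = kp * (set_point - measurement)   # closed-loop correction of the error
    return feedforward + feedback
```

When the measurement already equals the set point, only the feedforward term remains; the feedback term acts solely on the residual error.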
The PID controller is a typical closed-loop controller architecture.
There are two branches to the control theory field:
Linear control theory applies to systems made of components that obey the superposition principle, meaning the output is roughly proportional to the input. Such systems are governed by linear differential equations. A major subclass is linear time-invariant (LTI) systems, in which the parameters do not change with time. Powerful, highly general frequency-domain mathematical tools, such as the Laplace transform, Fourier transform, Z transform, Bode plot, root locus, and Nyquist stability criterion, can be applied to these systems. Together with concepts such as bandwidth, frequency response, eigenvalues, gain, resonant frequencies, zeros, and poles, these yield a description of the system, solutions for system response, and design techniques for most systems of interest.
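As a small illustration of the pole concept: a continuous-time LTI system is stable exactly when every pole of its transfer function has a negative real part, which can be checked numerically from the denominator polynomial. The example polynomials below are assumptions chosen for illustration.

```python
import numpy as np

def is_stable(denominator) -> bool:
    """True iff all poles (roots of the transfer function's denominator
    polynomial, highest power first) lie in the left half-plane."""
    poles = np.roots(denominator)
    return bool(np.all(poles.real < 0))

# s^2 + 3s + 2 = (s + 1)(s + 2): poles at -1 and -2, so the system is stable.
stable = is_stable([1.0, 3.0, 2.0])
# s^2 - 1 = (s - 1)(s + 1): a pole at +1 makes the system unstable.
unstable = is_stable([1.0, 0.0, -1.0])
```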
Nonlinear control theory covers the broader class of systems that do not obey the superposition principle. It applies directly to real-world systems, because all real control systems are nonlinear.
Most frequently, nonlinear differential equations govern these systems.
The mathematical techniques developed to handle them are more difficult and much less general, often applying only to narrow categories of systems.
These include limit cycle theory, Poincaré maps, the Lyapunov stability theorem, and describing functions.
Numerical techniques on computers are frequently used to analyze nonlinear systems, for instance, by modeling how they function using a simulation language.
If only solutions near a stable point are of interest, nonlinear systems can often be linearized by approximating them with a linear system obtained using perturbation theory, and linear techniques can then be applied.
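Perturbation-based linearization can be illustrated with the classic pendulum equation θ'' = -(g/L)·sin θ: near the stable equilibrium θ = 0, sin θ ≈ θ, which yields a linear system. The constants below are assumed values for illustration.

```python
import math

G, L = 9.81, 1.0  # assumed gravitational acceleration and pendulum length

def nonlinear_accel(theta: float) -> float:
    """Angular acceleration of the full nonlinear pendulum model."""
    return -(G / L) * math.sin(theta)

def linearized_accel(theta: float) -> float:
    """Small-angle approximation: sin(theta) ~ theta near the equilibrium."""
    return -(G / L) * theta

# Near theta = 0 the two models nearly agree, so linear techniques apply locally:
gap = abs(nonlinear_accel(0.05) - linearized_accel(0.05))
```

Far from the equilibrium the approximation breaks down, which is why the linearized model is only valid for solutions close to the stable point.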