Key concepts

This section introduces the fundamental concepts of control theory and serves as a foundation for the topics discussed in the subsequent chapters. By providing a clear overview of the main principles and terminology, it aims to give the reader a solid footing for the material that follows.

Stability

Stability refers to a system's ability to return to and remain near an equilibrium state when disturbed, or to maintain bounded outputs for bounded inputs. A system is considered stable if its response stays within a finite range in the presence of external disturbances, rather than growing without bound.

More formally, an equilibrium point $x_e$ is said to be Lyapunov stable if all solutions that start sufficiently close to $x_e$ remain close to it for all future time. If, in addition, all nearby solutions eventually converge to $x_e$, then $x_e$ is asymptotically stable.
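For a linear time-invariant (LTI) model $\dot{x} = Ax$, asymptotic stability of the origin can be checked from the eigenvalues of $A$: the origin is asymptotically stable exactly when every eigenvalue has a negative real part. The Python sketch below illustrates this check (the matrix `A` is only an illustrative example, not taken from this repository):

```python
import numpy as np

# Illustrative LTI system x_dot = A x (a lightly damped second-order system)
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])

eigenvalues = np.linalg.eigvals(A)
print("eigenvalues:", eigenvalues)

# The origin is asymptotically stable iff every eigenvalue has a negative real part
print("asymptotically stable:", bool(np.all(eigenvalues.real < 0)))
```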

Controllability and Observability

In control systems, controllability refers to the ability to manipulate a system's state using external inputs, while observability refers to the ability to infer the system's internal state from its external outputs.

Controllability:
Controllability is the ability to drive the system from any initial state to any desired final state within a finite time by choosing appropriate control inputs. In other words, a system is completely controllable if the inputs can steer the state vector to any point in the state space.
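For an LTI system $\dot{x} = Ax + Bu$ with $n$ states, the standard test is the Kalman rank condition: the system is controllable exactly when the controllability matrix $[B \; AB \; \cdots \; A^{n-1}B]$ has rank $n$. Below is a minimal sketch of this test (the matrices are illustrative examples, not from this repo):

```python
import numpy as np

# Illustrative 2-state, single-input system x_dot = A x + B u
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])

n = A.shape[0]
# Kalman controllability matrix: [B, AB, A^2 B, ..., A^(n-1) B]
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

# The system is controllable iff the controllability matrix has full rank n
print("controllable:", np.linalg.matrix_rank(ctrb) == n)
```

For large or poorly scaled systems the rank test can be numerically fragile, and Gramian-based tests are usually preferred; the rank condition is shown here because it maps directly onto the definition.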

Observability:
In a state-space representation, a physical system is considered observable if its current state can be determined from its output measurements (together with the known inputs) over a finite time interval. This means that, based on the output data alone (typically obtained from sensors), one can reconstruct the internal behavior of the entire system. Conversely, if a system is not observable, there exist state trajectories that cannot be uniquely identified using only the output measurements. In such cases, different internal states produce identical outputs, making it impossible to fully reconstruct the system's state from output data alone.
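Observability has a dual rank test: for $\dot{x} = Ax$, $y = Cx$ with $n$ states, the system is observable exactly when the observability matrix $[C; \; CA; \; \cdots; \; CA^{n-1}]$ has rank $n$. A minimal sketch, using the same illustrative system as above and assuming only the first state is measured:

```python
import numpy as np

# Illustrative system with output y = C x
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])  # only the first state is measured directly

n = A.shape[0]
# Kalman observability matrix: [C; CA; CA^2; ...; CA^(n-1)]
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# The system is observable iff the observability matrix has full rank n
print("observable:", np.linalg.matrix_rank(obsv) == n)
```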

Continuous versus Discrete Time

Continuous-time systems are those where signals and system dynamics evolve continuously over time. These systems are typically described using differential equations, where the behavior of the system is tracked at every instant. Continuous-time models are ideal for representing physical systems like mechanical or electrical processes where changes occur in an uninterrupted flow. They are commonly analyzed using tools such as Laplace transforms and are implemented in analog control systems or through digital controllers that approximate continuous behavior with high-speed sampling.
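As a small illustration of a continuous-time model, the sketch below simulates the step response of a first-order transfer function $G(s) = 1/(\tau s + 1)$ using SciPy (the time constant and time grid are arbitrary example values):

```python
import numpy as np
from scipy import signal

# Illustrative first-order continuous-time system G(s) = 1 / (tau*s + 1)
tau = 0.5
G = signal.TransferFunction([1.0], [tau, 1.0])

# Simulate the step response on a dense time grid
t, y = signal.step(G, T=np.linspace(0.0, 5.0, 500))
print("final value ~", y[-1])  # approaches the DC gain of 1
```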

Discrete-time systems represent signals and dynamics at specific, separate time intervals. These systems are described using difference equations and are inherently digital, making them suitable for computer-based control and signal processing. Discrete-time models arise naturally when a continuous system is sampled by sensors or when digital controllers are used. While they require techniques like the Z-transform for analysis, they offer advantages in implementation flexibility and robustness in the presence of noise and uncertainties. The choice between continuous and discrete time depends on the nature of the system and the design constraints of the control application.
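The sampling step mentioned above can be made concrete with SciPy's `cont2discrete`, which converts a continuous-time model to a discrete-time one for a chosen sampling period. A minimal sketch, assuming a zero-order hold and an arbitrary example sampling time of 0.1 s:

```python
from scipy import signal

# Continuous-time model: first-order system G(s) = 1 / (0.5 s + 1)
num, den = [1.0], [0.5, 1.0]

# Sample it with a zero-order hold at Ts = 0.1 s to obtain a discrete-time model
Ts = 0.1
num_d, den_d, dt = signal.cont2discrete((num, den), dt=Ts, method='zoh')
print("discrete numerator:  ", num_d)
print("discrete denominator:", den_d)
```

The resulting difference-equation coefficients are what a digital controller or simulator would actually use at each sampling instant.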

Linear versus Nonlinear Systems Control

Linear control techniques deal with systems that can be modeled using linear equations, where the principle of superposition applies. These techniques are widely used because they offer analytical simplicity and well-established design tools such as root locus, Bode plots, and state-space analysis. Linear control is effective when the system dynamics are approximately linear or can be linearized around a specific operating point. Common linear controllers include Proportional-Integral-Derivative (PID) controllers and Linear Quadratic Regulators (LQR), which are efficient and robust for a wide range of practical applications.
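As one concrete example of a linear design, an LQR gain can be computed by solving the continuous-time algebraic Riccati equation. The sketch below does this for an illustrative double-integrator plant with example weights (none of these values come from this repository):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative double-integrator plant x_dot = A x + B u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# LQR weights: Q penalizes state error, R penalizes control effort
Q = np.diag([1.0, 0.1])
R = np.array([[0.01]])

# Solve the Riccati equation and form the state-feedback gain K = R^-1 B^T P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P

# The closed loop x_dot = (A - B K) x should be asymptotically stable
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```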

Nonlinear control techniques are used for systems with dynamics that cannot be accurately captured by linear models, such as those with saturation, dead zones, or time-varying parameters. These systems do not obey the principle of superposition, making their analysis and control more complex. Nonlinear control methods, such as feedback linearization, sliding mode control, and Lyapunov-based design, are tailored to handle such complexities. While more challenging to implement and analyze, nonlinear control is essential for achieving desired performance and stability in systems where linear approximations are insufficient or fail altogether.
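To illustrate one of these methods, the sketch below applies feedback linearization to a simple pendulum model: the control input cancels the gravitational and damping terms and then imposes linear closed-loop dynamics. The model, gains, and initial condition are illustrative assumptions, not taken from this repository:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative pendulum: theta_ddot = -(g/l)*sin(theta) - b*theta_dot + u
g, l, b = 9.81, 1.0, 0.1
k1, k2 = 4.0, 4.0  # gains defining the desired linear closed-loop dynamics

def closed_loop(t, x):
    theta, theta_dot = x
    # Feedback linearization: cancel the nonlinear terms, then apply linear feedback
    u = (g / l) * np.sin(theta) + b * theta_dot - k1 * theta - k2 * theta_dot
    theta_ddot = -(g / l) * np.sin(theta) - b * theta_dot + u
    return [theta_dot, theta_ddot]

# Start far from the equilibrium (theta = 2 rad) and simulate for 10 s
sol = solve_ivp(closed_loop, (0.0, 10.0), [2.0, 0.0], max_step=0.01)
print("final angle ~", sol.y[0, -1])  # converges toward 0
```

With the cancellation in place the closed loop behaves like the linear system $\ddot{\theta} = -k_1\theta - k_2\dot{\theta}$, which is why the large initial angle converges even though a linearized design around the equilibrium would not be valid there.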