Optimal automatic control systems. Definition, features and general characteristics of optimal systems

Automatic systems that provide the best technical or technical-economic quality indicators under given real operating conditions and limitations are called optimal systems.
Optimal systems are divided into two classes:
- systems with "rigid" (fixed) settings, in which incomplete information does not prevent achievement of the control goal;
- adaptive systems, in which incomplete information makes the control goal unattainable without automatic adaptation of the system to conditions of uncertainty.
The goal of optimization is mathematically expressed as the requirement to ensure the minimum or maximum of some quality indicator, called the optimality criterion or objective function. The main quality criteria for automatic systems are: the cost of development, manufacturing and operation of the system; quality of operation (accuracy and speed); reliability; energy consumed; weight; volume, etc.

The quality of functioning is described by functionals of the form

J = \int_{t_0}^{t_k} f_0(x, u, f, t)\,dt,

where u are the control coordinates; x are the phase coordinates; f are the disturbances; t_0 and t_k are the beginning and end of the process.
When developing optimal ACS, it is necessary to take into account the restrictions imposed on the system, which are of two types:
- natural, determined by the operating principle of the object: for example, the speed of a hydraulic servomotor cannot exceed its speed with the valves fully open, the speed of an induction motor cannot exceed the synchronous speed, etc.;
- artificial (conditional), which are introduced deliberately: for example, limits on the armature current of a DC motor to ensure normal commutation and heating, or limits on elevator acceleration for passenger comfort, etc.
Optimality criteria can be scalar, if they are represented by a single particular criterion, or vector (multi-criteria), if they are composed of several particular criteria.
The time of the transient process can be taken as an optimality criterion, i.e.

J = \int_{t_0}^{t_k} dt = t_k - t_0.

An automatic control system is time-optimal if the minimum of this integral is ensured subject to the restrictions. Integral estimates of the quality of the transient process known from automatic control theory are also used, for example the quadratic estimate

J = \int_0^\infty x^2(t)\,dt.

As an optimality criterion for systems under random influences, the mean value of the squared system error M[e^2(t)] is used. When controlling from sources of limited power, a functional characterizing the energy consumed by the control is taken,

J = \int_{t_0}^{t_k} u(t)\,i(t)\,dt,

where u(t) and i(t) are the voltage and current of the control circuit. Sometimes the maximum profit of the technological process is taken as the optimality criterion for complex automatic control systems: I = \sum_i g_i P_i - S, where g_i is the price of product i; P_i is productivity; S is costs.
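As an illustration, here is a minimal numerical sketch of evaluating several of these criteria on a simulated plant; the first-order model, its parameter values and the 5 % settling band are hypothetical choices for illustration only.

```python
import numpy as np

# Minimal sketch: evaluating several optimality criteria on a simulated
# first-order plant T*dx/dt + x = k*u (all parameter values are hypothetical).
T_plant, k, dt = 1.0, 2.0, 1e-3
t = np.arange(0.0, 5.0, dt)

x, xs, us = 0.0, [], []
for _ in t:
    u = 1.0 - x                    # simple proportional law toward setpoint 1.0
    x += dt * (k * u - x) / T_plant
    xs.append(x)
    us.append(u)

e = 1.0 - np.array(xs)             # control error
u_arr = np.array(us)

I_sq = dt * np.sum(e**2)           # quadratic integral estimate of the error
I_en = dt * np.sum(u_arr**2)       # energy-type criterion (u^2 ~ control power)
outside = np.abs(e) > 0.05         # 5 % settling band
t_settle = t[outside.nonzero()[0][-1]] if outside.any() else 0.0

print(f"quadratic: {I_sq:.4f}  energy: {I_en:.4f}  settling time: {t_settle:.3f} s")
```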
Compared to less rigorous methods for designing closed-loop control systems, the advantages of optimization theory are as follows:
1) the design procedure is clearer, because all essential aspects of quality are included in a single design indicator;
2) the designer can evidently expect to obtain the best result with respect to the chosen quality indicator; for the problem under consideration, the region of restrictions is specified;
3) incompatibility of a number of quality requirements can be detected;
4) the procedure directly includes prediction, because the quality indicator is assessed from future values over the control interval;
5) the resulting control system will be adaptive if the design indicator is reformulated during operation and the controller parameters are recalculated at the same time;
6) determining optimal non-stationary processes introduces no additional difficulties;
7) nonlinear objects are also handled directly, although the complexity of the calculations increases.



The difficulties inherent in optimization theory are as follows:
1) transforming diverse design requirements into a mathematically meaningful quality indicator is not an easy task and may require trial and error;
2) existing optimal control algorithms for nonlinear systems require complex computation programs and, in some cases, a large amount of computer time;
3) the quality of the resulting control system is very sensitive to erroneous assumptions of various kinds and to changes in the parameters of the control object.

The optimization problem is solved in three stages:
1) construction of mathematical models of the physical process and of the quality requirements; the mathematical model of the quality requirements is the quality indicator of the system;
2). calculation of optimal control actions;
3). synthesis of a controller that generates optimal control signals.

Figure 10.1 shows the classification of optimal systems.

Definition and necessity of building optimal automatic control systems

Automatic control systems are usually designed based on the requirement to ensure certain quality indicators. In many cases, the necessary increase in dynamic accuracy and the improvement of the transient processes of automatic control systems are achieved with the help of corrective devices.

Particularly broad opportunities for improving quality indicators are provided by introducing into the ACS open-loop compensation channels and differential connections, synthesized from one or another condition of error invariance with respect to the reference or disturbing influences. However, the effect of corrective devices, open compensation channels and equivalent differential connections on the quality indicators of the ACS depends on the level at which signals are limited by the nonlinear elements of the system. The output signals of differentiating devices, usually short in duration and significant in amplitude, are limited by the elements of the system and therefore do not improve the quality indicators of the system, in particular its speed. The best results in increasing the quality indicators of an automatic control system in the presence of signal limitations are obtained by so-called optimal control.

The problem of synthesizing optimal systems was rigorously formulated relatively recently, when the concept of an optimality criterion was defined. Depending on the control goal, various technical or economic indicators of the controlled process can be chosen as the optimality criterion. In optimal systems, what is ensured is not merely a slight improvement in one or another technical-economic quality indicator, but the achievement of its minimum or maximum possible value.

If the optimality criterion expresses technical and economic losses (system errors, transition process time, energy consumption, funds, cost, etc.), then the optimal control will be the one that provides the minimum optimality criterion. If it expresses profitability (efficiency, productivity, profit, missile range, etc.), then optimal control should provide the maximum optimality criterion.

The problem of determining the optimal automatic control system, in particular the synthesis of optimal system parameters when the reference input and the interference arriving at its input are stationary random signals, was considered in Chap. 7. Recall that in this case the mean square error is taken as the optimality criterion. The conditions for increasing the accuracy of reproduction of the useful signal (reference input) and for suppressing the interference are contradictory, and therefore the task arises of choosing the (optimal) system parameters at which the mean square error takes its smallest value.

Synthesis of an optimal system using the mean-square optimality criterion is a particular problem. General methods for synthesizing optimal systems are based on the calculus of variations. However, the classical methods of the calculus of variations turn out to be unsuitable in many cases for modern practical problems, which require restrictions to be taken into account. The most convenient methods for synthesizing optimal automatic control systems are Bellman's dynamic programming method and Pontryagin's maximum principle.

Thus, along with the problem of improving various quality indicators of automatic control systems, the problem arises of constructing optimal systems in which the extreme value of one or another technical and economic quality indicator is achieved.

The development and implementation of optimal automatic control systems helps to increase the efficiency of use of production units, increase labor productivity, improve product quality, save energy, fuel, raw materials, etc.

Concepts about the phase state and phase trajectory of an object

In technology the task often arises of transferring a controlled object (process) from one state to another. For example, in target designation it is necessary to rotate the antenna of a radar station from an initial position with azimuth \alpha_0 to a specified position with azimuth \alpha_k. To do this, a control voltage u is supplied to the electric motor connected to the antenna through a gearbox. At each moment of time the state of the antenna is characterized by the current values of the rotation angle \alpha and the angular velocity \omega. These two quantities change depending on the control voltage u. Thus, there are three interconnected parameters u, \alpha and \omega (Fig. 11.1).

The quantities \alpha and \omega characterizing the state of the antenna are called phase coordinates, and u is the control action. When target-designating a radar such as a gun guidance station, the task arises of rotating the antenna both in azimuth and in elevation. In this case we have four phase coordinates of the object and two control actions. For a flying aircraft we can consider six phase coordinates (three spatial coordinates and three velocity components) and several control actions (engine thrust and quantities characterizing the deflections of the elevators, rudder and ailerons).

Fig. 11.1. Diagram of an object with one control action and two phase coordinates.

Fig. 11.2. Diagram of an object with r control actions and n phase coordinates.

Fig. 11.3. Diagram of an object with a vector representation of the control action and of the phase state of the object.

In the general case, at each moment of time the state of an object is characterized by n phase coordinates x_1, ..., x_n, and r control actions u_1, ..., u_r can be applied to the object (Fig. 11.2).

The transfer of a controlled object (process) from one state to another should be understood not only as mechanical movement (for example, of a radar antenna or an aircraft), but also as a required change in various physical quantities: temperature, pressure, cabin humidity, the chemical composition of particular raw material in the corresponding controlled technological process.

It is convenient to consider the control actions as the coordinates of a certain vector u = (u_1, ..., u_r), called the control action vector. The phase coordinates (state variables) of an object can likewise be considered as the coordinates of a certain vector or point x = (x_1, ..., x_n) in n-dimensional space. This point is called the phase state (state vector) of the object, and the n-dimensional space in which phase states are represented as points is called the phase space (state space) of the object under consideration. When vector representations are used, the controlled object can be depicted as shown in Fig. 11.3, where u is the control action vector and x is the point in phase space characterizing the phase state of the object. Under the influence of the control action, the phase point moves, describing a certain line in phase space called the phase trajectory of the motion of the object under consideration.
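A short sketch of how a phase trajectory can be computed for the antenna example is given below; the second-order model and all numerical constants are hypothetical.

```python
import numpy as np

# Sketch: phase trajectory of the antenna example (hypothetical constants).
# Phase coordinates: angle alpha and angular velocity omega; control: voltage u.
J, b, k, dt = 0.5, 0.8, 1.2, 1e-3
alpha, omega = 0.0, 0.0          # initial phase state
target = np.deg2rad(30.0)        # required azimuth

trajectory = []                  # points (alpha, omega) of the phase trajectory
for _ in np.arange(0.0, 10.0, dt):
    u = 5.0 * (target - alpha) - 1.0 * omega   # PD control voltage
    domega = (k * u - b * omega) / J
    alpha += dt * omega
    omega += dt * domega
    trajectory.append((alpha, omega))

# Each point of `trajectory` is a phase state; the sequence of points is the
# phase trajectory along which the object moves under the control action u(t).
```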

In general, an automatic system consists of a control object and a set of devices that provide control of this object. As a rule, this set of devices includes measuring devices, amplifying and converting devices, as well as actuators. If we combine these devices into one link (control device), then the block diagram of the system looks like this:

In an automatic system, information about the state of the control object is supplied to the input of the control device through a measuring device. Such systems are called feedback (closed-loop) systems; the absence of this information in the control algorithm means the system is open-loop. We describe the state of the control object at any moment of time by the variables x_1(t), x_2(t), ..., x_n(t), which are called the system coordinates or state variables. It is convenient to regard them as the coordinates of an n-dimensional state vector x(t).

The measuring device provides information about the state of the object. If the values of all coordinates of the state vector x(t) can be found from the measurement vector y(t), the system is said to be completely observable.

The control device generates a control action u(t). There may be several such actions; they form an r-dimensional control vector u(t).

The input of the control device receives a reference input g(t). This input action carries information about what the state of the object should be. The control object may be subject to a disturbing influence f(t), which represents a load or disturbance. The coordinates of the object are usually measured with some errors \xi(t), which are also random.

The task of the control device is to generate a control action u(t) such that the quality of functioning of the automatic system as a whole is the best in some sense.

We will consider control objects that are controllable, i.e. whose state vector can be changed as required by a corresponding change of the control vector. We will also assume that the object is completely observable.
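For linear objects dx/dt = Ax + Bu, y = Cx, controllability and observability can be checked by the Kalman rank conditions; a minimal sketch with illustrative matrices:

```python
import numpy as np

# Sketch: Kalman rank tests for controllability and observability of a
# linear model dx/dt = A x + B u, y = C x (matrices are illustrative).
A = np.array([[0.0, 1.0],
              [0.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

n = A.shape[0]
ctrl = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
obsv = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

print("controllable:", np.linalg.matrix_rank(ctrl) == n)
print("observable:  ", np.linalg.matrix_rank(obsv) == n)
```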

For example, the position of an aircraft is characterized by six state coordinates: x, y, z — the coordinates of the center of mass, and \vartheta, \psi, \gamma — the Euler angles, which determine the orientation of the aircraft about the center of mass. The aircraft's attitude can be changed using the elevators, the rudder, the ailerons and the engine thrust, so the control vector u = (u_1, u_2, u_3, u_4) is defined as follows:

u_1 = \delta_e — elevator deflection angle;
u_2 = \delta_r — rudder deflection angle;
u_3 = \delta_a — aileron deflection angle;
u_4 = P — thrust.

The state vector in this case is defined as x = (x, y, z, \vartheta, \psi, \gamma).

One can pose the problem of selecting a control by means of which the aircraft is transferred from a given initial state x(0) to a given final state x(T) with minimal fuel consumption or in minimal time.

Additional complexity in solving technical problems arises due to the fact that, as a rule, various restrictions are imposed on the control action and on the state coordinates of the control object.

There are restrictions on the deflection angles of the elevators, rudder and ailerons,

|u_j| \le u_j^{\max}, \quad j = 1, 2, 3,

and the thrust itself is limited, 0 \le u_4 \le P_{\max}.

The state coordinates of the control object and their derivatives are also subject to restrictions that are associated with permissible overloads.

We will consider control objects described by the differential equations

\frac{dx_i}{dt} = f_i(x_1, \dots, x_n, u_1, \dots, u_r), \quad i = 1, \dots, n, \qquad (1)

or, in vector form,

\dot{x} = f(x, u),

where x is the n-dimensional vector of the object state, u is the r-dimensional vector of control actions, and f is the function of the right-hand side of equation (1).

A restriction is imposed on the control vector u: we will assume that its values belong to some closed region U of r-dimensional space. This means that the control function u(t) belongs to the region U at every moment of time (u(t) \in U).

So, for example, if the coordinates of the control function satisfy the inequalities

|u_j| \le 1, \quad j = 1, \dots, r,

then the region U is an r-dimensional cube.
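A minimal sketch of enforcing such a constraint in simulation, assuming the unit-cube region from the example above:

```python
import numpy as np

# Minimal sketch: the admissible region U as the r-dimensional cube
# |u_j| <= 1 from the example above, with projection onto U.
def project_to_U(u):
    """Clip each coordinate so that the control vector lies in U = [-1, 1]^r."""
    return np.clip(u, -1.0, 1.0)

u_raw = np.array([0.6, -1.4, 0.2, 2.5])   # raw controller output (r = 4)
u_adm = project_to_U(u_raw)               # admissible control: [0.6, -1.0, 0.2, 1.0]
```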

Optimal control

Optimal control is the task of designing a system that provides, for a given control object or process, a control law or a control sequence of influences that ensures the maximum or minimum of a given set of system quality criteria.

To solve the optimal control problem, a mathematical model of the controlled object or process is constructed, describing its behavior over time under the influence of control actions and its own current state. The mathematical model for the optimal control problem includes: the formulation of the control goal, expressed through the control quality criterion; determination of differential or difference equations describing possible ways of movement of the control object; determination of restrictions on the resources used in the form of equations or inequalities.

The most widely used methods in the design of control systems are the calculus of variations, Pontryagin's maximum principle and Bellman dynamic programming.

Sometimes (for example, when managing complex objects such as a blast furnace in metallurgy, or when analyzing economic information) the initial data and knowledge about the controlled object available when setting up the optimal control problem contain uncertain or fuzzy information that cannot be processed by traditional quantitative methods. In such cases, one can use optimal control algorithms based on the mathematical theory of fuzzy sets (fuzzy control). The concepts and knowledge used are converted into fuzzy form, fuzzy rules for deriving decisions are determined, and then the fuzzy decisions are converted back into physical control variables.

Optimal control problem

Let us formulate the optimal control problem:

J = \int_{t_0}^{t_f} f_0(x(t), u(t), t)\,dt \to \min, \qquad \dot{x} = f(x, u, t), \quad x(t_0) = x_0, \quad x(t_f) = x_f;

here x is the state vector, u the control, and t_0, t_f the initial and final moments of time.

The optimal control problem is to find the state function x(t) and the control function u(t) on the interval [t_0, t_f] that minimize the functional.
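As a numerical illustration, here is a sketch of a direct (transcription) solution of a simple instance of this problem: the double integrator transferred from (0, 0) to (1, 0) on [0, 1] with the energy functional J = \int u^2 dt. The discretization size is an arbitrary choice.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch: direct transcription of a simple optimal control problem.
# Plant: double integrator x1' = x2, x2' = u; transfer (0,0) -> (1,0)
# on [0, 1] while minimizing J = integral of u^2 (an energy-type functional).
N = 50
dt = 1.0 / N

def rollout(u):
    x1, x2 = 0.0, 0.0
    for uk in u:                    # explicit Euler integration
        x1, x2 = x1 + dt * x2, x2 + dt * uk
    return x1, x2

def cost(u):
    return dt * np.sum(u**2)        # discretized functional J

def terminal(u):
    x1, x2 = rollout(u)
    return [x1 - 1.0, x2 - 0.0]     # boundary conditions at the right end

res = minimize(cost, np.zeros(N), constraints={"type": "eq", "fun": terminal})
u_opt = res.x   # approximates the analytic optimum u*(t) = 6 - 12 t
```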

Calculus of variations

Let us consider this optimal control problem as a Lagrange problem of the calculus of variations. To find the necessary conditions for an extremum, we apply the Euler-Lagrange theorem. The Lagrange functional has the form

\Lambda = \int_{t_0}^{t_f} L\,dt + l(x(t_0), x(t_f)),

where l expresses the boundary conditions, and the Lagrangian has the form

L = \lambda_0 f_0(x, u, t) + \lambda^T (\dot{x} - f(x, u, t)),

where \lambda_0 and \lambda(t) are the Lagrange multipliers (\lambda(t) is an n-dimensional vector).

The necessary conditions for an extremum, according to this theorem, have the form

\frac{\partial L}{\partial u} = 0, \qquad (3)

\frac{d}{dt} \frac{\partial L}{\partial \dot{x}} = \frac{\partial L}{\partial x}, \qquad (4)

\dot{x} = f(x, u, t), \qquad (5)

together with the boundary conditions at t_0 and t_f. Necessary conditions (3)-(5) form the basis for determining optimal trajectories. Having written these equations, we obtain a two-point boundary value problem, in which part of the boundary conditions is specified at the initial moment of time and the rest at the final moment. Methods for solving such problems are discussed in detail in the book.
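A sketch of solving this two-point boundary value problem numerically for the same double-integrator example; under the assumptions above, the necessary conditions give u = \lambda_2/2, \dot{\lambda}_1 = 0, \dot{\lambda}_2 = -\lambda_1.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Sketch: two-point boundary value problem from the necessary conditions
# for min integral of u^2 with x1' = x2, x2' = u, (0,0) -> (1,0) on [0,1].
# Stationarity gives u = lam2 / 2; costates: lam1' = 0, lam2' = -lam1.
def odes(t, y):
    x1, x2, lam1, lam2 = y
    return np.vstack([x2, lam2 / 2.0, np.zeros_like(t), -lam1])

def bc(ya, yb):
    # left end fixed at (0, 0), right end fixed at (1, 0)
    return np.array([ya[0], ya[1], yb[0] - 1.0, yb[1]])

t = np.linspace(0.0, 1.0, 50)
sol = solve_bvp(odes, bc, t, np.ones((4, t.size)))
u_opt = sol.y[3] / 2.0   # recovered optimal control, u*(t) = 6 - 12 t
```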

Pontryagin's maximum principle

The need for Pontryagin's maximum principle arises when it is impossible to satisfy the necessary condition (3), namely \partial L / \partial u = 0, anywhere in the admissible range of the control variable.

In this case, condition (3) is replaced by condition (6):

u^*(t) = \arg\max_{u \in U} H(x, \lambda, u, t). \qquad (6)

According to Pontryagin's maximum principle, the optimal control then takes its value on the boundary of the admissible region, for example at one of the ends of an admissible interval. Pontryagin's equations are written using the Hamilton function H, defined by the relation

H = \lambda^T f(x, u, t) - \lambda_0 f_0(x, u, t).

From the equations it follows that the Hamilton function H is related to the Lagrange function L as follows:

L = \lambda^T \dot{x} - H.

Substituting L from the last equation into equations (3)-(5), we obtain the necessary conditions expressed through the Hamilton function:

\dot{x} = \frac{\partial H}{\partial \lambda}, \qquad \dot{\lambda} = -\frac{\partial H}{\partial x}, \qquad H(x^*, \lambda, u^*, t) = \max_{u \in U} H(x^*, \lambda, u, t).

Necessary conditions written in this form are called Pontryagin equations. Pontryagin's maximum principle is discussed in more detail in the book.

Where is it used?

The maximum principle is especially important in control systems for maximum speed of response and minimum energy consumption, where relay-type controls are used that take extreme rather than intermediate values within the admissible control interval.
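A sketch of such a relay law for the classical time-optimal double integrator with |u| <= 1; the switching-curve form is the standard textbook result, and the initial state and step size are arbitrary.

```python
import numpy as np

# Sketch: relay (bang-bang) time-optimal control of the double integrator
# x1' = x2, x2' = u, |u| <= 1, driving the state to the origin. The maximum
# principle yields u = +/-1 with at most one switch, determined by the
# switching curve x1 = -x2*|x2|/2.
def u_time_optimal(x1, x2):
    s = x1 + 0.5 * x2 * abs(x2)      # switching function
    if s > 0:
        return -1.0
    if s < 0:
        return 1.0
    return -np.sign(x2)              # already on the switching curve

# simulate from an arbitrary initial state
x1, x2, dt = 2.0, 0.0, 1e-3
for _ in range(20000):
    u = u_time_optimal(x1, x2)
    x1, x2 = x1 + dt * x2, x2 + dt * u
# the state ends near the origin; u(t) takes only the extreme values +/-1
```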

History

For the development of the theory of optimal control, L.S. Pontryagin and his collaborators V.G. Boltyansky, R.V. Gamkrelidze and E.F. Mishchenko were awarded the Lenin Prize in 1962.

Dynamic programming method

The dynamic programming method is based on Bellman's principle of optimality, which is formulated as follows: an optimal control strategy has the property that, whatever the initial state and the control at the beginning of the process, the subsequent controls must constitute an optimal control strategy relative to the state obtained after the initial stage of the process. The dynamic programming method is described in more detail in the book.
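A minimal sketch of this backward recursion for a finite-horizon discrete linear-quadratic problem; the matrices and horizon are illustrative.

```python
import numpy as np

# Sketch: Bellman's backward recursion for a finite-horizon discrete LQR
# problem x[k+1] = A x[k] + B u[k], cost sum(x'Qx + u'Ru) + terminal x'Qx.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])
N = 50

P = Q.copy()                         # value function matrix at the final step
gains = []
for _ in range(N):                   # dynamic programming: march backward
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)    # Riccati recursion for the value function
    gains.append(K)
gains.reverse()   # gains[k] gives the optimal law u[k] = -gains[k] @ x[k]
```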


Literature

  1. Rastrigin L.A. Modern Principles of Control of Complex Objects. Moscow: Sovetskoe Radio, 1980. 232 p.
  2. Alekseev V.M., Tikhomirov V.M., Fomin S.V. Optimal Control. Moscow: Nauka, 1979. 223 p.


OPTIMAL AND ADAPTIVE SYSTEMS

(lectures, correspondence faculty, 5th year)

Lecture 1.

Introduction.

In the classical theory of automatic control (TAU), optimization and adaptation problems were posed mainly in relation to control "in the small". This means that the optimal program for changing the modes of the technological process, expressed in the setpoint actions of the regulators, was considered known, determined at the design stage. The control task was to carry out this program and to stabilize the programmed motion. In this case, only small deviations from the given motion were allowed, and the transient processes "in the small" were optimized according to certain criteria.

In the late 1950s and early 1960s there appeared the works of L.S. Pontryagin (the maximum principle), R. Bellman (dynamic programming) and R. Kalman (optimal filtering, controllability and observability), which laid the foundations of the modern theory of automatic control, a generally accepted definition of which does not yet exist.

The modern theory of automatic control is most accurately distinguished from classical TAU by taking into account the requirements of scientific and technological progress and of present and future automation. The most important of these requirements is the optimal use of all available resources (energy, information, computing) to achieve the main generalized final goal while observing the restrictions.

First of all, such optimization requires full use of the available a priori information in the form of a mathematical model of the controlled process or object. The use of such models not only at the design stage but also during the operation of systems is one of the characteristic features of the modern theory of automatic control.

Optimal control is possible only with optimal information processing. Therefore, the theory of optimal (and suboptimal) estimation (filtering) of dynamic processes is an integral part of the modern theory of automatic control. Particularly important is parametric identification (estimation of parameters and characteristics from experimental data), performed in real time in the operating modes of the control object.

True optimization of automatic control under incomplete a priori information is possible only during the functioning of the system in the current environment and situation. Therefore, the modern theory of automatic control must consider adaptive optimal (suboptimal) control "in the large". In addition, the modern theory of automatic control should consider methods of redundancy and structural assurance of reliability (especially the principles of automatic system reconfiguration in the event of failures).

Definition, features and general characteristics of optimal systems.

A system that is best in some technical-economic sense is called optimal. Its main feature is the presence of two control goals, which these systems achieve automatically.

The main goal of control is to maintain the controlled value at a given value and eliminate the resulting deviations of this value.

The goal of optimization is to ensure the best quality of control, determined by achieving the extremum of some technical and economic indicator, called the optimality criterion (OC).

Optimal systems are divided into two classes depending on the type of optimality criterion: statically optimal systems and dynamically optimal systems.

For statically optimal systems, the optimality criterion is a function of parameters or control actions. This criterion has an extremum in the static operating mode of the system, and the static characteristic, which expresses the dependence of the optimality criterion on the optimizing control actions, can shift in an unpredictable way under the influence of disturbances. An optimal system must find and maintain this extremum. Such systems are applicable if the disturbances that shift this characteristic change relatively slowly compared to the duration of the transient processes in the system; the system then has time to track the extremum in an almost static mode. Such conditions are usually met at the top level of the control hierarchy.
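A minimal sketch of such an extremum-seeking loop (step-type search); the drifting static characteristic and the step size are hypothetical.

```python
# Sketch: a simple step-type extremum-seeking loop for a statically optimal
# system. The static characteristic J(u) (hypothetical) drifts slowly, and
# the controller climbs toward its extremum by trial steps.
def J(u, drift):
    return -(u - 2.0 - drift) ** 2 + 5.0    # unknown plant characteristic

u, step = 0.0, 0.1
J_prev = J(u, 0.0)
for k in range(200):
    drift = 0.002 * k                  # slow disturbance shifting the optimum
    u += step
    J_new = J(u, drift)
    if J_new < J_prev:                 # quality got worse: reverse direction
        step = -step
    J_prev = J_new
# u oscillates around the drifting optimum u* = 2 + drift
```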

Dynamically optimal systems are distinguished by the fact that their optimality criterion is a functional, i.e. a function of functions of time: by specifying the time functions on which the functional depends, we obtain its numerical value. These systems can be used under relatively rapidly changing external influences which, however, do not exceed permissible limits. They are therefore used at the lower levels of the control hierarchy.

1.2. Optimality criteria for dynamically optimal systems

Usually these functionals have the form of definite integrals over time,

J = \int_0^T f_0(x(t), u(t))\,dt,

where x(t), u(t) are the state and control vectors of the system and T is the duration of the process (in particular, T = \infty is possible).

Depending on the integrand function f_0, these criteria are of the following main types.

1. Linear functionals, for which f_0 is a linear function of its variables:

- the criterion of maximum speed of response, for f_0 = 1, i.e.

J = \int_0^T dt = T,

which is equal to the duration of the process; the corresponding systems are called time-optimal (optimal in terms of speed);

- linear integral estimates of the quality of the transient process,

J = \int_0^\infty x(t)\,dt;

- the maximum productivity criterion,

J = \int_0^T q(t)\,dt \to \max,

where q(t) is the quantity of product produced.

2. Quadratic functionals, for which f_0 is a quadratic form of the variables entering it:

- quadratic integral estimates of the quality of the transient process,

J = \int_0^\infty x^2(t)\,dt;

- the criterion of energy consumption for control,

J = \int_0^T u^2(t)\,dt,

where u is the control action and u^2 is proportional to the power spent on control;

- the generalized quadratic criterion, equal to the sum of the two previous ones taken with certain weighting coefficients; it characterizes, as a compromise, both the quality of the transient process and the energy spent on it:

J = \int_0^\infty (x^T Q x + u^T R u)\,dt,

where Q and R are positive definite square matrices.

3. Functionals that do not contain integrals:

- the minimax criterion, under which it is necessary to ensure the minimum value of the maximum modulus (norm) of the deviation of the controlled process from its reference law of change,

J = \max_t \|x(t) - x_e(t)\| \to \min,

where x_e(t) is the reference law of change. The simplest example of this criterion in the scalar case is the well-known maximum overshoot of the transient process;

- the final state function,

J = \Phi(x(T)),

which is a functional because the final state of the object x(T) is a function of the control action u(t). This optimality criterion can be used in combination with one of the criteria considered above that has the form of a definite integral.

The choice of one or another optimality criterion for a specific object or system is made on the basis of an appropriate study of the operation of the object and of the technical and economic requirements imposed on it; this question cannot be resolved within the framework of automatic control theory alone. Depending on the physical meaning of the optimality criterion, it must be either minimized or maximized: in the first case it expresses losses, in the second, technical and economic benefit. Formally, by changing the sign in front of the functional, a maximization problem can be reduced to a minimization problem.
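As a concrete instance of the generalized quadratic criterion above, here is a sketch computing the optimal state feedback u = -Kx from the algebraic Riccati equation; the model and the weights are illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Sketch: minimizing the generalized quadratic criterion
# J = integral of (x'Qx + u'Ru) for dx/dt = A x + B u via the algebraic
# Riccati equation; the optimal law is u = -K x (matrices are illustrative).
A = np.array([[0.0, 1.0], [0.0, -0.2]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])    # weight on the quality of the transient process
R = np.array([[0.5]])       # weight on the energy spent on control

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)      # optimal state-feedback gain
```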

Lecture 2.

1.3. Boundary conditions and restrictions
for dynamically optimal systems

The main goal of control in such systems is usually formulated as the task of transferring the representative point from some initial state x(0) to some final state x(T). The initial state is usually called the left end of the optimal trajectory, and the final state the right end. Taken together, these data form the boundary conditions. Control problems may differ in the type of boundary conditions.

1. The problem with fixed ends of the trajectory arises when x(0) and x(T) are fixed points of the space.

2. The problem with moving ends of the trajectory arises when x(0) and x(T) belong to some known lines or surfaces of the space.

3. The problem with free ends of the trajectory arises when these points may occupy arbitrary positions. In practice, mixed problems also occur, for example with x(0) fixed and x(T) moving. Such a problem arises when an object starting from a given fixed state must "catch up" with some reference trajectory (Fig. 1).

Constraints are additional conditions that control actions and controlled quantities must satisfy. There are two types of restrictions.

1. Unconditional (natural) restrictions, which hold by virtue of the physical laws governing the processes in the control object (OU). These restrictions show that certain quantities and functions of them cannot go beyond boundaries defined by equalities or inequalities. For example, the voltage equation of a DC motor bounds the attainable current and speed for a given supply voltage; another example is the restriction on the speed of an induction motor, \omega < \omega_0, where \omega_0 is the synchronous speed.

2. Conditional (artificial) restrictions, expressing requirements on quantities or functions of them according to which they must not exceed boundaries defined by equalities or inequalities for the sake of long-term and safe operation of the object. Examples are restrictions on supply voltage and on permissible speed, acceleration, etc.

To ensure conditional restrictions, it is necessary to take circuit or software measures when implementing the corresponding control device.

Constraints, regardless of their type, expressed by equalities are called classical; those expressed by inequalities are called non-classical.
