The dual problem [1] of a given linear programming (LP) problem is another linear programming problem obtained from the original (primal) problem as follows:
- Each variable of the primal problem becomes a constraint of the dual problem;
- Each constraint of the primal problem becomes a variable of the dual problem;
- The direction of optimization is reversed: a maximum in the primal problem becomes a minimum in the dual, and vice versa.
The weak duality theorem states that the objective value of the dual problem at any feasible solution always bounds the objective value of the primal problem at any feasible solution (from above or from below, depending on whether it is a maximization or a minimization problem).
The strong duality theorem states that, moreover, if the primal problem has an optimal solution, then the dual problem also has an optimal solution, and the two optimal values are equal [2].
These theorems belong to a wider class of duality theorems in optimization. The strong duality theorem describes one of the cases in which the duality gap (the gap between the optimum of the primal problem and the optimum of the dual) equals 0.
The geometric meaning of the dual problem is discussed in the book by Yudin and Holstein [3], which also covers its economic interpretation [4].
Construction of the dual problem
Given a primal linear programming problem, the following algorithm can be used to construct its dual [5].
Let the primal problem be defined as follows:
- A set of n variables is given: $x_1, \dots, x_n$.
- For each variable $x_i$, a sign restriction is specified: it must be either non-negative ($x_i \ge 0$), or non-positive ($x_i \le 0$), or unrestricted ($x_i \in \mathbb{R}$).
- An objective function is given: maximize $c_1 x_1 + \dots + c_n x_n$.
- A list of m constraints is given. Constraint $j$ has the form $a_{j1} x_1 + \dots + a_{jn} x_n \lessgtr b_j$, where the relation symbol $\lessgtr$ is one of $\le$, $\ge$, or $=$.
The dual problem is constructed as follows.
- Each constraint of the primal problem becomes a dual variable. Thus we obtain m variables: $y_1, \dots, y_m$.
- The sign restriction of each dual variable is "opposite" to the relation symbol of the corresponding primal constraint: "$\le$" becomes $y_j \ge 0$, "$\ge$" becomes $y_j \le 0$, and "$=$" becomes $y_j \in \mathbb{R}$.
- The objective function of the dual problem is: minimize $b_1 y_1 + \dots + b_m y_m$.
- Each variable of the primal problem becomes a dual constraint, so we obtain n constraints. The coefficient of a dual variable in a dual constraint equals the coefficient of the corresponding primal variable in the corresponding primal constraint. Constraint $i$ has the form $a_{1i} y_1 + \dots + a_{mi} y_m \lessgtr c_i$, where the relation symbol mirrors the sign restriction on variable $x_i$ in the primal problem: $x_i \ge 0$ becomes "$\ge$", $x_i \le 0$ becomes "$\le$", and $x_i \in \mathbb{R}$ becomes "$=$".
From this algorithm it is easy to see that the dual of the dual problem coincides with the primal problem.
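The algorithm above can be sketched in code. The following is a minimal illustration (the function name and data layout are mine, not from the source) that mechanically produces the dual of a maximization problem from its matrix, right-hand sides, objective, constraint relation symbols, and variable sign restrictions:

```python
from typing import List

def dual_of_max_lp(A: List[List[int]], b: List[int], c: List[int],
                   senses: List[str], var_signs: List[str]) -> dict:
    """Dual of:  maximize c.x  subject to  A x (senses) b,
    with variable sign restrictions var_signs ('>=0', '<=0' or 'free')."""
    # Each primal constraint j becomes a dual variable y_j whose sign
    # restriction is "opposite" to the constraint's relation symbol.
    flip_sense_to_sign = {'<=': '>=0', '>=': '<=0', '=': 'free'}
    # Each primal variable i becomes a dual constraint whose relation
    # symbol mirrors the variable's sign restriction.
    flip_sign_to_sense = {'>=0': '>=', '<=0': '<=', 'free': '='}
    m, n = len(A), len(A[0])
    A_T = [[A[j][i] for j in range(m)] for i in range(n)]  # transpose of A
    return {
        'direction': 'min',                 # minimize b.y
        'objective': b,
        'A': A_T,
        'rhs': c,
        'senses': [flip_sign_to_sense[s] for s in var_signs],
        'var_signs': [flip_sense_to_sign[s] for s in senses],
    }

# Example: maximize 3*x1 + 4*x2 subject to 5*x1 + 6*x2 <= 7, x1, x2 >= 0.
dual = dual_of_max_lp([[5, 6]], [7], [3, 4], ['<='], ['>=0', '>=0'])
# The result encodes: minimize 7*y1 subject to 5*y1 >= 3, 6*y1 >= 4, y1 >= 0.
```

The two dictionaries encode the "opposite sign" rules of the algorithm; applying the analogous min-to-max construction to the output recovers the original problem.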
Vector formulation
If all constraints have the same sign, the construction above can be stated more compactly using vectors and matrices. The following table summarizes the relationship between different types of primal and dual problems.
| Primal | Dual | Note |
|---|---|---|
| Maximize $c^T x$ subject to $Ax \le b$, $x \ge 0$ | Minimize $b^T y$ subject to $A^T y \ge c$, $y \ge 0$ | This pair is called a "symmetric" dual problem. |
| Maximize $c^T x$ subject to $Ax \le b$ | Minimize $b^T y$ subject to $A^T y = c$, $y \ge 0$ | This pair is called an "asymmetric" dual problem. |
| Maximize $c^T x$ subject to $Ax = b$, $x \ge 0$ | Minimize $b^T y$ subject to $A^T y \ge c$ | |
Duality Theorems
Below we assume that the primal problem is "Maximize $c^T x$ subject to $Ax \le b$, $x \ge 0$", and the dual problem is "Minimize $b^T y$ subject to $A^T y \ge c$, $y \ge 0$".
Weak duality
The weak duality theorem states that for every feasible solution $x$ of the primal problem and every feasible solution $y$ of the dual problem, $c^T x \le b^T y$. In other words, the objective value at any feasible solution of the dual problem is an upper bound on the objective function of the primal problem, and the objective value at any feasible solution of the primal problem is a lower bound on the objective function of the dual problem. It follows that $\max_x c^T x \le \min_y b^T y$.
In particular, if the primal problem is unbounded (from above), then the dual problem is infeasible, and if the dual problem is unbounded (from below), then the primal problem is infeasible.
The weak duality theorem is relatively easy to prove [6]. Suppose the primal problem is "Maximize $c^T x$ subject to $Ax \le b$, $x \ge 0$". Suppose we form a linear combination of the constraints with non-negative coefficients such that the resulting coefficients of $x$ are componentwise at least $c$. This linear combination gives an upper bound on the objective function. The variables $y$ of the dual problem are precisely the coefficients of this linear combination, and the dual problem seeks the coefficients that minimize the resulting right-hand side. This yields the linear programming problem "Minimize $b^T y$ subject to $A^T y \ge c$, $y \ge 0$" [7]. See "A simple example" below.
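The inequality chain behind this argument can be checked numerically. The sketch below uses illustrative data of my own choosing (not from the source) and exact rational arithmetic to verify that, for feasible $x$ and $y$, $c^T x \le (A^T y)^T x = y^T (Ax) \le y^T b$:

```python
from fractions import Fraction as F

# Illustrative data (my own): maximize c.x subject to A x <= b, x >= 0.
A = [[F(1), F(2)], [F(3), F(1)]]
b = [F(4), F(6)]
c = [F(2), F(3)]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(M, v):
    return [dot(row, v) for row in M]

x = [F(1), F(1)]   # feasible for the primal: A x = (3, 4) <= (4, 6)
y = [F(1), F(1)]   # feasible for the dual:   A^T y = (4, 3) >= (2, 3)

Ax = matvec(A, x)
ATy = matvec([[A[j][i] for j in range(2)] for i in range(2)], y)
assert all(Ax[j] <= b[j] for j in range(2)) and all(xi >= 0 for xi in x)
assert all(ATy[i] >= c[i] for i in range(2)) and all(yj >= 0 for yj in y)

# Weak duality chain: c.x <= (A^T y).x = y.(A x) <= y.b
assert dot(c, x) <= dot(ATy, x) == dot(y, Ax) <= dot(y, b)
print(dot(c, x), dot(y, b))   # primal value 5, dual value 10
```

The middle equality holds because both sides equal $y^T A x$; the outer inequalities use $x \ge 0$ and $y \ge 0$ respectively.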
Strong duality
The strong duality theorem states that the bounds given by the weak duality theorem are in fact tight: $\max_x c^T x = \min_y b^T y$.
The strong duality theorem is much harder to prove; the proofs usually use the weak duality theorem as a lemma [8].
One proof uses the simplex method and relies on showing that, with a suitable pivot rule, it terminates with a correct solution. The proof establishes that when the simplex method finishes solving the primal linear programming problem, the solution of the dual problem can be read off from the final tableau. Thus, running the simplex algorithm yields solutions to both the primal and the dual problem simultaneously [9].
Another proof uses Farkas' lemma [10].
Theoretical Application
Weak duality has an interesting theoretical consequence: it shows that finding a single feasible solution is as hard as finding an optimal feasible solution. Suppose we have an oracle that, given a linear programming problem, finds an arbitrary feasible solution (if one exists). Given the problem "Maximize $c^T x$ subject to $Ax \le b$, $x \ge 0$", we can construct another problem by combining it with its dual. The combined problem has both $x$ and $y$ as variables:
- Maximize $1$
- subject to $Ax \le b$, $A^T y \ge c$, $c^T x \ge b^T y$, $x \ge 0$, $y \ge 0$
If the combined problem has a feasible solution $(x, y)$, then by weak duality $c^T x = b^T y$. Thus $x$ must be an optimal solution of the primal problem and $y$ must be an optimal solution of the dual problem. If the combined problem has no feasible solution, then the primal problem has no optimal solution: it is either infeasible or unbounded.
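The combined system is purely mechanical to write down. The sketch below (function name mine; the one-constraint data is my own illustration) checks a candidate pair $(x, y)$ against the combined constraints using exact arithmetic:

```python
from fractions import Fraction as F

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def feasible_for_combined(A, b, c, x, y):
    """Check (x, y) against the combined system:
    A x <= b,  A^T y >= c,  c.x >= b.y,  x >= 0,  y >= 0."""
    m, n = len(A), len(A[0])
    primal_ok = (all(dot(A[j], x) <= b[j] for j in range(m))
                 and all(xi >= 0 for xi in x))
    dual_ok = (all(sum(A[j][i] * y[j] for j in range(m)) >= c[i]
                   for i in range(n))
               and all(yj >= 0 for yj in y))
    return primal_ok and dual_ok and dot(c, x) >= dot(b, y)

# One-constraint illustration: maximize 3*x1 + 4*x2
# subject to 5*x1 + 6*x2 <= 7, x1, x2 >= 0.
A, b, c = [[F(5), F(6)]], [F(7)], [F(3), F(4)]
opt = feasible_for_combined(A, b, c, [F(0), F(7, 6)], [F(2, 3)])
sub = feasible_for_combined(A, b, c, [F(0), F(1)], [F(2, 3)])
# Only the optimal pair passes: a suboptimal x violates c.x >= b.y,
# because weak duality forces c.x <= b.y for every feasible pair.
```

Note that the constraint $c^T x \ge b^T y$, combined with weak duality's $c^T x \le b^T y$, is what forces equality at any feasible point of the combined problem.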
Examples
A simple example
Consider a primal problem with two variables and one constraint:
- Maximize $3x_1 + 4x_2$
- subject to $5x_1 + 6x_2 \le 7$
- $x_1 \ge 0$, $x_2 \ge 0$
Applying the recipe above for constructing the dual problem, we obtain a problem with one variable and two constraints:
- Minimize $7y_1$
- subject to $5y_1 \ge 3$, $6y_1 \ge 4$, $y_1 \ge 0$
It is easy to see that the maximum of the primal problem is attained when $x_1$ is at its lower bound (0) and $x_2$ is at the upper bound imposed by the constraint (7/6). The maximum is $4 \cdot 7/6 = 14/3$.
Similarly, the minimum of the dual problem is attained when $y_1$ is pushed down to the lowest value allowed by the constraints: the first constraint gives the bound 3/5, while the second gives the stricter bound 4/6, so the actual minimum is $4/6 = 2/3$ and the minimum of the objective function is $7 \cdot 2/3 = 14/3$.
In accordance with the strong duality theorem, the maximum of the primal problem equals the minimum of the dual.
We can use this example to illustrate the proof of the weak duality theorem. Suppose that in the primal problem we want an upper bound on the objective function $3x_1 + 4x_2$. We can multiply the constraint by some coefficient, say $y_1$. For any $y_1 \ge 0$ we have $y_1(5x_1 + 6x_2) \le 7y_1$. Now, if $5y_1 \ge 3$ and $6y_1 \ge 4$, then for $x_1, x_2 \ge 0$ we get $3x_1 + 4x_2 \le 5y_1 x_1 + 6y_1 x_2 \le 7y_1$, so $7y_1$, the objective of the dual problem, is an upper bound on the objective of the primal problem.
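These optimum values can be verified mechanically. The sketch below evaluates the primal objective at the vertices of the (triangular) feasible region and solves the one-variable dual directly, using exact rational arithmetic:

```python
from fractions import Fraction as F

# Primal: maximize 3*x1 + 4*x2 subject to 5*x1 + 6*x2 <= 7, x1, x2 >= 0.
# The feasible region is a triangle; a linear objective attains its
# maximum at one of the three vertices.
vertices = [(F(0), F(0)), (F(7, 5), F(0)), (F(0), F(7, 6))]
primal_opt = max(3 * x1 + 4 * x2 for x1, x2 in vertices)

# Dual: minimize 7*y1 subject to 5*y1 >= 3, 6*y1 >= 4, y1 >= 0.
# The binding lower bound on y1 is the larger of 3/5 and 4/6.
y1 = max(F(3, 5), F(4, 6))
dual_opt = 7 * y1

assert primal_opt == dual_opt == F(14, 3)   # strong duality: both equal 14/3
```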
Farmer's Example
Consider a farmer who can grow wheat and barley on an area of $L$ units of land, using $F$ units of fertilizer and $P$ units of pesticide. Growing one unit of wheat requires one unit of land, $F_1$ units of fertilizer and $P_1$ units of pesticide, while one unit of barley requires one unit of land, $F_2$ units of fertilizer and $P_2$ units of pesticide.
The primal problem is the farmer's decision of how much wheat ($x_1$) and how much barley ($x_2$) to grow, given their selling prices of $S_1$ and $S_2$ per unit.
- Maximize: $S_1 x_1 + S_2 x_2$ (maximize the revenue from growing wheat and barley)
- subject to:
- $x_1 + x_2 \le L$ (the farmer cannot use more land than he has)
- $F_1 x_1 + F_2 x_2 \le F$ (the farmer cannot use more fertilizer than is available)
- $P_1 x_1 + P_2 x_2 \le P$ (the farmer cannot use more pesticide than he has)
- $x_1 \ge 0$, $x_2 \ge 0$ (a negative amount of grain cannot be grown).
For the dual problem, suppose a planning board sets unit prices $y_L$, $y_F$, $y_P$ for each of these resources (inputs). The planning board's task is to minimize the total cost of the available resources while setting the unit price of each resource so that the resource value of each product (output) is at least its selling price. This corresponds to the following linear programming problem:
- Minimize: $L y_L + F y_F + P y_P$ (minimize the total cost of the resources, as the "objective function")
- subject to:
- $y_L + F_1 y_F + P_1 y_P \ge S_1$ (the farmer must receive at least $S_1$ per unit of wheat)
- $y_L + F_2 y_F + P_2 y_P \ge S_2$ (the farmer must receive at least $S_2$ per unit of barley)
- $y_L, y_F, y_P \ge 0$ (prices cannot be negative).
In matrix form:
- Minimize: $\begin{pmatrix} L & F & P \end{pmatrix} \begin{pmatrix} y_L \\ y_F \\ y_P \end{pmatrix}$
- subject to: $\begin{pmatrix} 1 & F_1 & P_1 \\ 1 & F_2 & P_2 \end{pmatrix} \begin{pmatrix} y_L \\ y_F \\ y_P \end{pmatrix} \ge \begin{pmatrix} S_1 \\ S_2 \end{pmatrix}, \quad \begin{pmatrix} y_L \\ y_F \\ y_P \end{pmatrix} \ge 0$
The primal problem deals with physical quantities: all resource quantities are limited and the unit prices of the outputs are known, and the task is to determine how much of each output to produce so as to maximize total revenue. The dual problem deals with economic values: with price floors on the outputs and known resource quantities, the task is to determine which resource pricing scheme to set so as to minimize total cost.
Each variable in the space of the primal problem corresponds to an inequality in the space of the dual problem, and each inequality in the space of the primal problem corresponds to a variable in the space of the dual problem.
The coefficients that bound the inequalities in the space of the primal problem are used to compute the objective function in the dual space, and the coefficients used to compute the objective function in the space of the primal problem bound the inequalities in the dual space.
Both the primal and the dual problem use the same matrix. In the space of the primal problem, this matrix expresses the consumption of physical quantities required to produce the outputs. In the space of the dual problem, it expresses the creation of the economic values associated with the outputs from the unit prices of the inputs.
Since each inequality can be replaced by an equality and a slack variable, each primal variable corresponds to a dual slack variable, and each dual variable corresponds to a primal slack variable. This relation allows us to speak of complementary slackness.
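To make the farmer's problem concrete, the sketch below plugs in illustrative numbers of my own choosing ($S_1 = 4$, $S_2 = 3$, $L = 10$, fertilizer row $2x_1 + x_2 \le 16$, pesticide row $x_1 + 3x_2 \le 24$), finds the primal optimum by enumerating the vertices of the two-variable feasible region, and then certifies optimality with a feasible dual solution via weak duality:

```python
from fractions import Fraction as F

# Illustrative data (my own, not from the source):
# maximize 4*x1 + 3*x2  (revenue from wheat and barley)
# subject to x1 + x2 <= 10 (land), 2*x1 + x2 <= 16 (fertilizer),
#            x1 + 3*x2 <= 24 (pesticide), x1, x2 >= 0.
rows = [([F(1), F(1)], F(10)), ([F(2), F(1)], F(16)), ([F(1), F(3)], F(24))]
c = [F(4), F(3)]

def solve2(a, b):
    """Solve the 2x2 linear system a x = b; return None if singular."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    if det == 0:
        return None
    return ((b[0] * a[1][1] - a[0][1] * b[1]) / det,
            (a[0][0] * b[1] - b[0] * a[1][0]) / det)

# The optimum of a bounded 2-variable LP is attained at a vertex, i.e. at
# the intersection of two boundary lines (resource constraints or axes).
lines = rows + [([F(1), F(0)], F(0)), ([F(0), F(1)], F(0))]
feasible = []
for i in range(len(lines)):
    for j in range(i + 1, len(lines)):
        p = solve2([lines[i][0], lines[j][0]], [lines[i][1], lines[j][1]])
        if p and p[0] >= 0 and p[1] >= 0 and \
           all(r[0] * p[0] + r[1] * p[1] <= v for r, v in rows):
            feasible.append(p)

primal_opt = max(c[0] * x1 + c[1] * x2 for x1, x2 in feasible)

# Weak duality as a certificate: y = (2, 1, 0) is feasible for the dual
# (y_L + 2*y_F + y_P >= 4, y_L + y_F + 3*y_P >= 3, y >= 0), and its
# objective 10*y_L + 16*y_F + 24*y_P equals the primal optimum,
# so both solutions are optimal.
y = (F(2), F(1), F(0))
assert y[0] + 2 * y[1] + y[2] >= 4 and y[0] + y[1] + 3 * y[2] >= 3
dual_value = 10 * y[0] + 16 * y[1] + 24 * y[2]
assert primal_opt == dual_value == 36
```

Note how the dual certificate makes strong duality tangible: since any feasible dual point upper-bounds any feasible primal point, matching values prove both are optimal without solving the dual.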
Infeasible problems
A linear programming problem can also be unbounded or infeasible. Duality theory tells us that:
- If the primal problem is unbounded, then the dual problem is infeasible;
- If the dual problem is unbounded, then the primal problem is infeasible [11].
However, it is possible for both the primal and the dual problem to be infeasible. Here is an example:
- Maximize: $2x_1 - x_2$
- subject to: $x_1 - x_2 \le 1$, $-x_1 + x_2 \le -2$, $x_1 \ge 0$, $x_2 \ge 0$
Applications
The max-flow min-cut theorem is a special case of the strong duality theorem: flow maximization is the primal linear programming problem, and cut minimization is the dual. See the Ford-Fulkerson theorem.
Other graph-theoretic theorems can be proved using the strong duality theorem; in particular, Kőnig's theorem [12].
The minimax theorem for zero-sum games can also be proved using the strong duality theorem [13].
Alternative Algorithm
Sometimes the dual problem can be found more intuitively, without using the matrix of the problem. Consider the following linear programming problem:
- Minimize $c_1 x_1 + \dots + c_n x_n$
- subject to $a_{j1} x_1 + \dots + a_{jn} x_n \ge b_j$ for $j = 1, \dots, m$
- $x_1, \dots, x_n \ge 0$
We have $m$ main conditions, and all variables are non-negative. We introduce one dual variable $y_j$ for each main constraint and one dual variable $s_i$ for each sign constraint $x_i \ge 0$, obtaining:
- Minimize $c_1 x_1 + \dots + c_n x_n$
- subject to $a_{j1} x_1 + \dots + a_{jn} x_n \ge b_j$ (dual variable $y_j$)
- $x_i \ge 0$ (dual variable $s_i$)
Since this is a minimization problem, we would like the dual problem to give a lower bound on the primal. In other words, we would like the sum of the right-hand sides of the constraints, weighted by the dual variables, to be as large as possible, under the condition that for each primal variable the weighted sum of its coefficients does not exceed its coefficient in the objective function. For example, $x_1$ appears in the constraints; summing its constraint coefficients with the dual weights, we get $a_{11} y_1 + \dots + a_{m1} y_m + s_1$. This sum must not exceed $c_1$. As a result, we obtain:
- Maximize $b_1 y_1 + \dots + b_m y_m$
- subject to $a_{1i} y_1 + \dots + a_{mi} y_m + s_i = c_i$ for $i = 1, \dots, n$ (equivalently, $a_{1i} y_1 + \dots + a_{mi} y_m \le c_i$ once the slack variables $s_i$ are dropped)
- $y_1, \dots, y_m \ge 0$, $s_1, \dots, s_n \ge 0$
The calculation above assumes that the problem is presented in standard form. This does not restrict generality, since any linear programming problem can be reduced to standard form.
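This standard-form recipe, like the earlier maximization construction, amounts to transposing the matrix and swapping the roles of $b$ and $c$. A minimal sketch (function name and data are my own illustration), written in the inequality form with the slack variables dropped:

```python
def dual_of_standard_min_lp(A, b, c):
    """Dual of the standard-form problem
        minimize c.x  subject to  A x >= b,  x >= 0,
    namely:  maximize b.y  subject to  A^T y <= c,  y >= 0."""
    m, n = len(A), len(A[0])
    A_T = [[A[j][i] for j in range(m)] for i in range(n)]  # transpose of A
    return {'direction': 'max', 'objective': b, 'A': A_T, 'rhs': c}

# Illustrative instance (my own numbers): minimize 5*x1 + 6*x2
# subject to x1 + 2*x2 >= 4, 3*x1 + 4*x2 >= 10, x1, x2 >= 0.
d = dual_of_standard_min_lp([[1, 2], [3, 4]], [4, 10], [5, 6])
# Encodes: maximize 4*y1 + 10*y2
# subject to y1 + 3*y2 <= 5, 2*y1 + 4*y2 <= 6, y1, y2 >= 0.
```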
Interpretations in real life
The duality theorem has an economic interpretation. If we interpret the primal linear programming problem as the classical "resource allocation" problem, its dual can be interpreted as the problem of valuing those resources, i.e. of determining shadow prices [14]. The economic interpretation of the dual problem can also be found in the book by Lungu [15].
The duality theorem also has a physical interpretation [16] .
Notes
- ↑ The term conjugate problem is also sometimes used, as, for example, in the book by Yudin and Holstein ( Yudin, Holstein 1969 , 149) or in the book by Lungu ( Lungu 2005 , 67). In the latter, the primal problem is also called the main problem .
- ↑ Gartner, Matousek, 2006 , p. 81-104.
- ↑ Yudin, Holstein, 1969 , p. 150-152 Clause 5.2.
- ↑ Yudin, Holstein, 1969 , p. 157-159 Paragraph 5.5.
- ↑ Gartner, Matousek, 2006 , p. 85.
- ↑ The proof of a very close statement from which this theorem follows can be found in the book of Yudin and Holstein ( 1969 , 159, Lemma 5.1)
- ↑ Gartner, Matousek, 2006 , p. 81–83.
- ↑ The proof can be found in the book of Yudin and Holstein, where it is called the “first duality theorem” ( Yudin, Holstein 1969 , 164, Theorem 6.1)
- ↑ Gartner, Matousek, 2006 , p. 87–89.
- ↑ Gartner, Matousek, 2006 , p. 81–94.
- ↑ Yudin, Holstein, 1969 , p. 162, Lemma 5.3.
- ↑ A. A. Ahmadi. Lecture 6: Linear Programming and Matching . Princeton University (2016).
- ↑ Gartner, Matousek, 2006 , Section 8.1.
- ↑ In the book by Yudin and Holstein, the term preliminary estimates (of production factors) is used for shadow prices.
- ↑ Lungu, 2005 , p. 68 Clause 5.4.
- ↑ Gartner, Matousek, 2006 , p. 86–87.
Literature
- Jiri Matousek, Bernd Gärtner. Understanding and Using Linear Programming. Springer, 2006. pp. 81–104. (Universitext).
- Yudin D. B., Holstein E. G. Linear Programming (Theory, Methods and Applications). Moscow: Nauka, 1969.
- Lungu K. N. Linear Programming: A Guide to Solving Problems. Moscow: FIZMATLIT, 2005. ISBN 5-9221-0631-7.