
The dual linear programming problem

The dual problem [1] for a given linear programming (LP) problem is another linear programming problem that is obtained from the original (primal) problem as follows:

  • Each variable of the primal problem becomes a constraint of the dual problem;
  • Each constraint of the primal problem becomes a variable of the dual problem;
  • The direction of the objective is reversed: the maximum in the primal problem becomes the minimum in the dual, and vice versa.

The weak duality theorem states that the objective value of the dual problem at any feasible solution always bounds the objective value of the primal problem at any feasible solution (as an upper or lower bound, depending on whether it is a maximization or minimization problem).

The strong duality theorem states that, moreover, if the primal problem has an optimal solution, then the dual problem also has an optimal solution, and the two optima are equal [2].

These theorems belong to a wider class of duality theorems in optimization. The strong duality theorem describes one of the cases in which the duality gap (the gap between the optimum of the primal problem and the optimum of the dual) is zero.

The geometric meaning of the dual problem is discussed in the book by Yudin and Holstein [3], which also covers its economic meaning [4].

Construction of the dual problem

Given a primal linear programming problem, the following algorithm can be used to construct its dual [5].

Let the primal problem be defined as follows:

  • A set of n variables is given: $x_1, \ldots, x_n$.
  • For each variable $x_i$, a sign restriction is specified: it must be either non-negative ($x_i \geqslant 0$), or non-positive ($x_i \leqslant 0$), or unrestricted ($x_i \in \mathbb{R}$).
  • An objective function is given: maximize $c_1 x_1 + \cdots + c_n x_n$.
  • A list of m constraints is given. Each constraint j has the form $a_{j1} x_1 + \cdots + a_{jn} x_n \lesseqqgtr b_j$, where the sign before $b_j$ may be any one of the three: $\geqslant$, $\leqslant$, or $=$.

The dual problem is constructed as follows.

  • Each constraint of the primal problem becomes a dual variable. Thus we obtain m variables: $y_1, \ldots, y_m$.
  • The sign restriction of each dual variable is "opposite" to the sign of the corresponding primal constraint. Thus "$\geqslant b_j$" becomes $y_j \leqslant 0$, "$\leqslant b_j$" becomes $y_j \geqslant 0$, and "$= b_j$" becomes $y_j \in \mathbb{R}$.
  • The objective function of the dual problem is: minimize $b_1 y_1 + \cdots + b_m y_m$.
  • Each variable of the primal problem becomes a dual constraint. Thus we obtain n constraints. The coefficient of a dual variable in a dual constraint equals the coefficient of the corresponding primal variable in the corresponding primal constraint. Each constraint i has the form $a_{1i} y_1 + \cdots + a_{mi} y_m \lesseqqgtr c_i$, where the sign before $c_i$ matches the sign restriction on variable i in the primal problem: $x_i \leqslant 0$ turns into "$\leqslant c_i$", $x_i \geqslant 0$ turns into "$\geqslant c_i$", and $x_i \in \mathbb{R}$ turns into "$= c_i$".

It is easy to see from this algorithm that the dual of the dual problem coincides with the primal problem.
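
This construction can be written out directly in code. The following is a minimal sketch in plain Python (all names are our own choices for illustration, not a library API): it takes a primal problem in the form described above and returns the data of its dual.

```python
# A minimal sketch of the dualization algorithm described above.
# The primal is "maximize c @ x" with constraints A[j] @ x (<=, >=, =) b[j]
# and a sign restriction on each variable: '>=0', '<=0', or 'free'.

def build_dual(c, A, b, constraint_signs, var_signs):
    """Return (b, AT, c, dual_var_signs, dual_constraint_signs) describing
    the dual problem "minimize b @ y"."""
    # Each primal constraint becomes a dual variable with the "opposite" sign:
    # '<=' -> y_j >= 0,  '>=' -> y_j <= 0,  '=' -> y_j free.
    var_sign_of = {'<=': '>=0', '>=': '<=0', '=': 'free'}
    y_signs = [var_sign_of[s] for s in constraint_signs]

    # Each primal variable becomes a dual constraint whose sign matches the
    # variable's restriction: x_i >= 0 -> '>=',  x_i <= 0 -> '<=',  free -> '='.
    con_sign_of = {'>=0': '>=', '<=0': '<=', 'free': '='}
    dual_con_signs = [con_sign_of[s] for s in var_signs]

    # The dual constraint matrix is the transpose of the primal one.
    AT = [[A[j][i] for j in range(len(A))] for i in range(len(c))]
    return b, AT, c, y_signs, dual_con_signs
```

For instance, `build_dual([3, 4], [[5, 6]], [7], ['='], ['>=0', '>=0'])` yields the one-variable dual "minimize $7y_1$ subject to $5y_1 \geqslant 3$, $6y_1 \geqslant 4$, $y_1$ free", matching the hand construction in "A simple example" below.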

Vector formulation

If all constraints have the same sign, the method above can be stated more compactly using vectors and matrices. The following list gives the correspondence between the main types of primal and dual problems.

  • Primal: maximize $c^T x$ subject to $Ax \leqslant b$, $x \geqslant 0$. Dual: minimize $b^T y$ subject to $A^T y \geqslant c$, $y \geqslant 0$. Such a pair is called a "symmetric" dual problem.
  • Primal: maximize $c^T x$ subject to $Ax \leqslant b$. Dual: minimize $b^T y$ subject to $A^T y = c$, $y \geqslant 0$. Such a pair is called an "asymmetric" dual problem.
  • Primal: maximize $c^T x$ subject to $Ax = b$, $x \geqslant 0$. Dual: minimize $b^T y$ subject to $A^T y \geqslant c$.
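
As a quick numerical illustration of the symmetric pair, the sketch below solves both problems with SciPy's linprog (which minimizes by convention, so the primal objective is negated); the data here are made up for the example.

```python
import numpy as np
from scipy.optimize import linprog

# Made-up data for a symmetric primal/dual pair.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([8.0, 9.0])
c = np.array([3.0, 4.0])

# Primal: maximize c @ x  s.t.  A x <= b, x >= 0 (negate c since linprog minimizes).
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

# Dual: minimize b @ y  s.t.  A^T y >= c, y >= 0 (rewritten as -A^T y <= -c).
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2)

print(-primal.fun, dual.fun)  # both 17.0: the optima coincide (strong duality)
```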

Duality Theorems

Below we assume that the primal problem is stated as "Maximize $c^T x$ subject to [constraints]" and that the dual problem is stated as "Minimize $b^T y$ subject to [constraints]".

Weak duality

The weak duality theorem states that for every feasible solution x of the primal problem and every feasible solution y of the dual problem, $c^T x \leqslant b^T y$. In other words, the objective value at any feasible solution of the dual problem is an upper bound on the objective of the primal problem, and the objective value at any feasible solution of the primal problem is a lower bound on the objective of the dual problem. It follows that

$\max_{x} c^T x \leqslant \min_{y} b^T y$

In particular, if the primal problem is unbounded (from above), then the dual problem has no feasible solution, and if the dual problem is unbounded (from below), then the primal problem has no feasible solution.

The weak duality theorem is relatively easy to prove [6]. Suppose the primal problem is "Maximize $c^T x$ subject to $Ax \leqslant b$, $x \geqslant 0$". Suppose we form a linear combination of the constraints with non-negative coefficients such that the resulting coefficients of x are componentwise at least $c^T$. This linear combination gives an upper bound on the objective function. The variables y of the dual problem are the coefficients of this linear combination, and the dual problem seeks the coefficients that minimize the resulting right-hand side. This gives the problem "Minimize $b^T y$ subject to $A^T y \geqslant c$, $y \geqslant 0$" [7]. See "A simple example" below.
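
The inequality chain behind this argument, $c^T x \leqslant (A^T y)^T x = y^T (Ax) \leqslant y^T b$, can be checked numerically. Below is a minimal sketch with made-up data that samples random primal- and dual-feasible points and verifies the bound.

```python
import numpy as np

# Weak duality on made-up data: for ANY primal-feasible x and dual-feasible y,
# c @ x <= (A.T @ y) @ x = y @ (A @ x) <= y @ b, using x >= 0 and y >= 0.
rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([8.0, 9.0])
c = np.array([3.0, 4.0])

for _ in range(10_000):
    x = rng.uniform(0, 4, size=2)
    y = rng.uniform(0, 4, size=2)
    if np.all(A @ x <= b) and np.all(A.T @ y >= c):   # keep feasible pairs only
        assert c @ x <= b @ y + 1e-12                 # weak duality never fails
```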

Strong duality

The strong duality theorem states that the bounds given by the weak duality theorem are tight, i.e.,

$\max_{x} c^T x = \min_{y} b^T y$

The strong duality theorem is considerably harder to prove. The proof usually uses the weak duality theorem as a lemma [8].

One proof uses the simplex method and relies on showing that, with a suitable rule for selecting the pivot column, it terminates with a correct solution. The proof establishes that when the simplex method finishes solving the primal linear programming problem, the solution of the dual problem can be read off from the final tableau. Thus, by running the simplex algorithm we obtain solutions of both the primal and the dual problems simultaneously [9].
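
Practical solvers expose the same fact: after solving the primal problem they report the dual solution as well. A sketch using SciPy's HiGHS backend follows; the dual variables appear as the constraint "marginals" (the attribute layout and sign convention below are assumptions based on recent SciPy versions and are worth checking against your installation).

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([8.0, 9.0])
c = np.array([3.0, 4.0])

# Solve the (negated) primal with the HiGHS backend.
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")

# HiGHS reports multipliers of the inequality constraints as "marginals";
# for a maximization solved as min(-c @ x) the dual variables are their negation.
y = -res.ineqlin.marginals
print(y, b @ y, -res.fun)  # dual solution, dual objective, primal optimum
```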

Another proof uses Farkas' lemma [10].

Theoretical Application

Weak duality has an interesting theoretical consequence: it shows that finding a single feasible solution is as hard as finding an optimal one. Suppose we have an oracle that, given a linear programming problem, finds an arbitrary feasible solution (if one exists). If the problem is "Maximize $c^T x$ subject to $Ax \leqslant b$, $x \geqslant 0$", we can build another problem by combining it with its dual. The combined problem has both x and y as variables:

Maximize 1

subject to $Ax \leqslant b$, $A^T y \geqslant c$, $c^T x \geqslant b^T y$, $x \geqslant 0$, $y \geqslant 0$

If the combined problem has a feasible solution (x, y), then by weak duality $c^T x \leqslant b^T y$, while the third constraint forces $c^T x \geqslant b^T y$, hence $c^T x = b^T y$. Thus x must be an optimal solution of the primal problem and y an optimal solution of the dual problem. If the combined problem has no feasible solution, then the primal problem has no optimal solution (it is either infeasible or unbounded).
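
A sketch of the combined problem with made-up data follows; since any feasible point is automatically optimal for both problems, a zero objective turns the solver into a pure feasibility oracle.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([8.0, 9.0])
c = np.array([3.0, 4.0])
m, n = A.shape

# Variables z = (x, y). All constraints written as A_ub @ z <= b_ub:
#   A x <= b          ->  [ A    0  ] z <= b
#   A^T y >= c        ->  [ 0  -A^T ] z <= -c
#   c^T x >= b^T y    ->  [-c^T b^T ] z <= 0
A_ub = np.vstack([
    np.hstack([A, np.zeros((m, m))]),
    np.hstack([np.zeros((n, n)), -A.T]),
    np.concatenate([-c, b])[None, :],
])
b_ub = np.concatenate([b, -c, [0.0]])

res = linprog(np.zeros(n + m), A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * (n + m))
x, y = res.x[:n], res.x[n:]
print(res.status, x, y)  # status 0: x and y are simultaneously optimal
```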

Examples

A simple example

Consider a primal problem with two variables and one constraint:

Maximize $3x_1 + 4x_2$
subject to
$5x_1 + 6x_2 = 7$, $x_1 \geqslant 0$, $x_2 \geqslant 0$

Applying the recipe above for constructing the dual problem, we obtain a problem with one variable and two constraints:

Minimize $7y_1$
subject to
$5y_1 \geqslant 3$, $6y_1 \geqslant 4$, $y_1 \in \mathbb{R}$

It is easy to see that the maximum of the primal problem is attained when $x_1$ is pushed down to its lower bound (0) and $x_2$ is pushed up to the upper bound imposed by the constraint (7/6). The maximum is $4 \cdot 7/6 = 14/3$.

Similarly, the minimum of the dual problem is attained when $y_1$ takes the smallest value permitted by the constraints: the first constraint gives the lower bound 3/5, while the second gives the stricter lower bound 4/6, so the minimum is attained at $y_1 = 4/6$ and the optimal value is $7 \cdot 4/6 = 14/3$.

In accordance with the strong duality theorem, the maximum of the primal problem equals the minimum of the dual.
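
This can be confirmed numerically. The sketch below solves both problems of this example with SciPy's linprog.

```python
from scipy.optimize import linprog

# Primal: maximize 3*x1 + 4*x2  s.t.  5*x1 + 6*x2 = 7,  x1, x2 >= 0.
primal = linprog([-3, -4], A_eq=[[5, 6]], b_eq=[7], bounds=[(0, None)] * 2)

# Dual: minimize 7*y1  s.t.  5*y1 >= 3,  6*y1 >= 4,  y1 free
# (the >= rows are rewritten as <= rows for linprog).
dual = linprog([7], A_ub=[[-5], [-6]], b_ub=[-3, -4], bounds=[(None, None)])

print(-primal.fun, dual.fun)  # both equal 14/3, as strong duality promises
```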

We use this example to illustrate the proof of the weak duality theorem. Suppose that in the primal problem we want an upper bound on the objective function $3x_1 + 4x_2$. We can use the constraint multiplied by some coefficient, say $y_1$. For any $y_1$ we have $y_1 \cdot (5x_1 + 6x_2) = 7y_1$. Now, if $y_1 \cdot 5x_1 \geqslant 3x_1$ and $y_1 \cdot 6x_2 \geqslant 4x_2$, then $y_1 \cdot (5x_1 + 6x_2) \geqslant 3x_1 + 4x_2$, so that $7y_1 \geqslant 3x_1 + 4x_2$. Therefore the objective of the dual problem is an upper bound on the objective of the primal problem.

Farmer's Example

Consider a farmer who can grow wheat and barley on an area of land L using fertilizer F and pesticide P. Growing one unit of wheat per unit area requires $F_1$ units of fertilizer and $P_1$ units of pesticide; similarly, one unit of barley requires $F_2$ and $P_2$.

The primal problem is the farmer's decision of how much wheat ($x_1$) and barley ($x_2$) to grow, given that their selling prices are $S_1$ and $S_2$ per unit.

Maximize:
$S_1 x_1 + S_2 x_2$ (maximize the revenue from wheat and barley)
subject to:
$x_1 + x_2 \leqslant L$ (the farmer cannot use more land than is available)
$F_1 x_1 + F_2 x_2 \leqslant F$ (no more fertilizer than is available)
$P_1 x_1 + P_2 x_2 \leqslant P$ (no more pesticide than is available)
$x_1, x_2 \geqslant 0$ (a negative amount of grain cannot be grown).

For the dual problem, suppose that unit prices $y_L$, $y_F$, $y_P$ for each of these inputs are set by a planning board. The planning board's task is to minimize the total valuation of the available resources, subject to the resource cost of producing each product being no less than its selling price. This corresponds to the following linear programming problem:

Minimize:
$L y_L + F y_F + P y_P$ (minimize the total valuation of the resources, the objective function)
subject to:
$y_L + F_1 y_F + P_1 y_P \geqslant S_1$ (the resources for one unit of wheat are valued at no less than $S_1$)
$y_L + F_2 y_F + P_2 y_P \geqslant S_2$ (the resources for one unit of barley are valued at no less than $S_2$)
$y_L, y_F, y_P \geqslant 0$ (prices cannot be negative).

In matrix form:

Minimize: $\begin{bmatrix} L & F & P \end{bmatrix} \begin{bmatrix} y_L \\ y_F \\ y_P \end{bmatrix}$
subject to: $\begin{bmatrix} 1 & F_1 & P_1 \\ 1 & F_2 & P_2 \end{bmatrix} \begin{bmatrix} y_L \\ y_F \\ y_P \end{bmatrix} \geqslant \begin{bmatrix} S_1 \\ S_2 \end{bmatrix}, \quad \begin{bmatrix} y_L \\ y_F \\ y_P \end{bmatrix} \geqslant 0.$
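
A numerical sketch of this pair follows. All quantities (L, F, P, $F_1$, $F_2$, $P_1$, $P_2$, $S_1$, $S_2$) are hypothetical values chosen only for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: available resources and per-unit requirements/prices.
L, F, P = 10.0, 40.0, 25.0     # land, fertilizer, pesticide available
F1, F2 = 3.0, 5.0              # fertilizer per unit of wheat / barley
P1, P2 = 2.0, 2.0              # pesticide per unit of wheat / barley
S1, S2 = 30.0, 40.0            # selling prices of wheat / barley

A = np.array([[1.0, 1.0], [F1, F2], [P1, P2]])
b = np.array([L, F, P])
s = np.array([S1, S2])

# Farmer's (primal) problem: maximize revenue S1*x1 + S2*x2.
farmer = linprog(-s, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

# Planner's (dual) problem: minimize L*yL + F*yF + P*yP with A^T y >= s.
planner = linprog(b, A_ub=-A.T, b_ub=-s, bounds=[(0, None)] * 3)

print(-farmer.fun, planner.fun)  # maximal revenue = minimal resource valuation
print(planner.x)                 # imputed (shadow) price of each resource
```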

The primal problem deals with physical quantities: with all inputs available in limited amounts and the unit prices of the outputs known, it asks how much of each output to produce so as to maximize total revenue. The dual problem deals with economic values: with the output prices fixed and the input availability known, it asks which prices to assign to the inputs so as to minimize their total valuation.

Each variable in the primal space corresponds to an inequality in the dual space, and each inequality in the primal space corresponds to a variable in the dual space.

The coefficients that bound the inequalities in the primal space are used to compute the objective function in the dual space, while the coefficients used to compute the objective function in the primal space bound the inequalities in the dual space.

Both the primal and the dual problems use the same matrix. In the primal space this matrix expresses the consumption of physical inputs required to produce the outputs; in the dual space it expresses the economic value created per unit of output from the unit prices of the inputs.

Since each inequality can be replaced by an equality plus a slack variable, each primal variable corresponds to a dual slack variable and each dual variable corresponds to a primal slack variable. This relation allows us to speak of complementary slackness.

Infeasible and unbounded problems

A linear programming problem can also be unbounded or infeasible. Duality theory tells us that:

  • if the primal problem is unbounded, then the dual problem is infeasible;
  • if the dual problem is unbounded, then the primal problem is infeasible [11].

However, it can happen that both problems, the primal and the dual, are infeasible. Here is an example:

Maximize: $2x_1 - x_2$
subject to: $x_1 - x_2 \leqslant 1$
$-x_1 + x_2 \leqslant -2$
$x_1, x_2 \geqslant 0.$
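
(The two constraints demand $x_1 - x_2 \leqslant 1$ and $x_1 - x_2 \geqslant 2$ simultaneously, and the dual constraints contradict each other in the same way.) A solver confirms this; in the sketch below SciPy reports infeasibility for both problems.

```python
from scipy.optimize import linprog

# Primal: maximize 2*x1 - x2  s.t.  x1 - x2 <= 1,  -x1 + x2 <= -2,  x >= 0.
primal = linprog([-2, 1], A_ub=[[1, -1], [-1, 1]], b_ub=[1, -2],
                 bounds=[(0, None)] * 2)

# Dual: minimize y1 - 2*y2  s.t.  y1 - y2 >= 2,  -y1 + y2 >= -1,  y >= 0
# (rewritten with <= rows for linprog).
dual = linprog([1, -2], A_ub=[[-1, 1], [1, -1]], b_ub=[-2, 1],
               bounds=[(0, None)] * 2)

print(primal.status, dual.status)  # both 2: infeasible
```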

Applications

The max-flow min-cut theorem is a special case of the strong duality theorem: maximizing the flow is the primal linear programming problem, and minimizing the cut is the dual one. See the Ford-Fulkerson theorem.

Other graph-theoretic results can be proved using the strong duality theorem, in particular Kőnig's theorem [12].

The minimax theorem for zero-sum games can be proved using the strong duality theorem [13].

Alternative Algorithm

Sometimes a more intuitive way to obtain the dual problem can be found that does not require writing out the problem matrix. Consider the following linear programming problem:

Minimize $\sum_{i=1}^{m} c_i x_i + \sum_{j=1}^{n} d_j t_j$
subject to
$\sum_{i=1}^{m} a_{ij} x_i + e_j t_j \geqslant g_j, \quad 1 \leqslant j \leqslant n$
$f_i x_i + \sum_{j=1}^{n} b_{ij} t_j \geqslant h_i, \quad 1 \leqslant i \leqslant m$
$x_i \geqslant 0, \; t_j \geqslant 0, \quad 1 \leqslant i \leqslant m, \; 1 \leqslant j \leqslant n$

We have $m + n$ conditions, and all variables are non-negative. We need to define $m + n$ dual variables: $y_j$ and $s_i$. Multiplying each constraint by its dual variable, we get:

Minimize $\sum_{i=1}^{m} c_i x_i + \sum_{j=1}^{n} d_j t_j$
subject to
$\sum_{i=1}^{m} a_{ij} x_i \cdot y_j + e_j t_j \cdot y_j \geqslant g_j \cdot y_j, \quad 1 \leqslant j \leqslant n$
$f_i x_i \cdot s_i + \sum_{j=1}^{n} b_{ij} t_j \cdot s_i \geqslant h_i \cdot s_i, \quad 1 \leqslant i \leqslant m$
$x_i \geqslant 0, \; t_j \geqslant 0, \quad 1 \leqslant i \leqslant m, \; 1 \leqslant j \leqslant n$
$y_j \geqslant 0, \; s_i \geqslant 0, \quad 1 \leqslant j \leqslant n, \; 1 \leqslant i \leqslant m$

Since this is a minimization problem, we want to obtain a dual problem that is a lower bound on the primal. In other words, we want the sum of all right-hand sides of the constraints to be as large as possible under the condition that, for each primal variable, the sum of its coefficients does not exceed its coefficient in the objective function. For example, $x_1$ appears in $n + 1$ constraints. Summing its constraint coefficients we get $a_{11} y_1 + a_{12} y_2 + \cdots + a_{1n} y_n + f_1 s_1$. This sum must not exceed $c_1$. As a result we get:

Maximize $\sum_{j=1}^{n} g_j y_j + \sum_{i=1}^{m} h_i s_i$
subject to
$\sum_{j=1}^{n} a_{ij} y_j + f_i s_i \leqslant c_i, \quad 1 \leqslant i \leqslant m$
$e_j y_j + \sum_{i=1}^{m} b_{ij} s_i \leqslant d_j, \quad 1 \leqslant j \leqslant n$
$y_j \geqslant 0, \; s_i \geqslant 0, \quad 1 \leqslant j \leqslant n, \; 1 \leqslant i \leqslant m$

The calculations above assume that the problem is presented in standard form. This does not affect the generality of the reasoning, since any linear programming problem can be reduced to standard form.
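
This construction can be sanity-checked numerically: generate random data, assemble both problems exactly as written above, and compare the optima. A sketch follows (the data ranges are arbitrary, chosen so that both problems are feasible and bounded).

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 3, 4                          # m variables x_i, n variables t_j
a = rng.uniform(1, 2, (m, n))        # a[i, j]
bb = rng.uniform(1, 2, (m, n))       # b[i, j]
c, f, h = (rng.uniform(1, 2, m) for _ in range(3))
d, e, g = (rng.uniform(1, 2, n) for _ in range(3))

# Primal over z = (x, t): minimize c@x + d@t, all constraints '>='.
rows_j = np.hstack([a.T, np.diag(e)])     # sum_i a[i,j] x_i + e_j t_j >= g_j
rows_i = np.hstack([np.diag(f), bb])      # f_i x_i + sum_j b[i,j] t_j >= h_i
primal = linprog(np.concatenate([c, d]),
                 A_ub=-np.vstack([rows_j, rows_i]),
                 b_ub=-np.concatenate([g, h]),
                 bounds=[(0, None)] * (m + n))

# Dual over w = (y, s): maximize g@y + h@s, all constraints '<='.
rows_i2 = np.hstack([a, np.diag(f)])      # sum_j a[i,j] y_j + f_i s_i <= c_i
rows_j2 = np.hstack([np.diag(e), bb.T])   # e_j y_j + sum_i b[i,j] s_i <= d_j
dual = linprog(-np.concatenate([g, h]),
               A_ub=np.vstack([rows_i2, rows_j2]),
               b_ub=np.concatenate([c, d]),
               bounds=[(0, None)] * (m + n))

print(primal.fun, -dual.fun)             # equal, by strong duality
```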

Real-world interpretations

The duality theorem has an economic interpretation. If we interpret the primal linear programming problem as a classical "resource allocation" problem, its dual can be interpreted as a resource valuation problem, with the dual variables playing the role of shadow prices [14]. An economic interpretation of the dual problem can also be found in the book by Lungu [15].

The duality theorem also has a physical interpretation [16] .

Notes

  1. Sometimes the term conjugate problem is used, as, for example, in the book by Yudin and Holstein (Yudin, Holstein 1969, 149) or in the book by Lungu (Lungu 2005, 67). In the latter, the primal problem is also called the main problem.
  2. Gartner, Matousek, 2006, pp. 81–104.
  3. Yudin, Holstein, 1969, pp. 150–152, Section 5.2.
  4. Yudin, Holstein, 1969, pp. 157–159, Section 5.5.
  5. Gartner, Matousek, 2006, p. 85.
  6. A proof of a closely related statement, from which this theorem follows, can be found in the book by Yudin and Holstein (1969, 159, Lemma 5.1).
  7. Gartner, Matousek, 2006, pp. 81–83.
  8. A proof can be found in the book by Yudin and Holstein, where it is called the "first duality theorem" (Yudin, Holstein 1969, 164, Theorem 6.1).
  9. Gartner, Matousek, 2006, pp. 87–89.
  10. Gartner, Matousek, 2006, pp. 81–94.
  11. Yudin, Holstein, 1969, p. 162, Lemma 5.3.
  12. A. A. Ahmadi. Lecture 6: Linear programming and matching. Princeton University (2016).
  13. Gartner, Matousek, 2006, Section 8.1.
  14. In the book by Yudin and Holstein, the term preliminary estimates (of production factors) is used for shadow prices.
  15. Lungu, 2005, p. 68, Section 5.4.
  16. Gartner, Matousek, 2006, pp. 86–87.

Literature

  • Jiri Matousek, Bernd Gärtner. Understanding and Using Linear Programming. Springer, 2006. pp. 81–104. (Universitext).
  • Yudin D. B., Holstein E. G. Linear Programming (Theory, Methods and Applications). Moscow: Nauka, 1969.
  • Lungu K. N. Linear Programming: A Guide to Solving Problems. Moscow: FIZMATLIT, 2005. ISBN 5-9221-0631-7.
Source - https://ru.wikipedia.org/w/index.php?title=Dual_problem_of_linear_programing&oldid=99391054

