
Linear programming

Linear programming is a mathematical discipline devoted to the theory and methods of solving extremal problems on sets in n-dimensional vector space defined by systems of linear equations and inequalities.

Linear programming (LP) is a special case of convex programming , which in turn is a special case of mathematical programming . At the same time, it is the basis of several methods for solving integer and nonlinear programming problems. One of the generalizations of linear programming is linear fractional programming .

Many properties of linear programming problems can also be interpreted as properties of polyhedra, which makes it possible to formulate and prove them geometrically.

History

Mathematical studies of individual economic problems and the mathematical formalization of numerical material were carried out as far back as the 19th century. In the mathematical analysis of the extended production process, algebraic relations were used, and their analysis was carried out by means of differential calculus. This made it possible to form a general idea of the problem.

Economic development required quantitative indicators, and in the 1920s the intersectoral balance (IOB) was created; it prompted the creation and study of mathematical models. The development of the IOB in 1924-1925 in the USSR influenced the work of the economist and statistician Vasily Vasilyevich Leontiev, who developed an intersectoral model of the production and distribution of products.

In 1938, Leonid Vitalievich Kantorovich, by way of a scientific consultation, began to study a purely practical task: drawing up the best plan for loading peeling machines for a plywood trust. The task did not yield to conventional methods, and it became clear that it was not an isolated case. [1]

In 1939, Leonid Kantorovich published the work “Mathematical Methods of Organization and Production Planning”, in which he formulated a new class of extreme problems with constraints and developed an effective method for solving them, thus laying the foundations of linear programming.

The study of such problems led to the creation of a new scientific discipline of linear programming and opened a new stage in the development of economic and mathematical methods.

In 1949, the American mathematician George Bernard Dantzig developed an effective method for solving linear programming problems - the simplex method. [1]

The term "programming" must be understood in the sense of "planning" (one of the translations of the English programming). It was proposed in the mid-1940s by George Dantzig, one of the founders of linear programming, even before computers were used to solve linear optimization problems.

The interior-point method was first mentioned by I. I. Dikin in 1967. [2]

Problems

The general (standard) linear programming problem is the problem of finding the minimum of a linear objective function (linear form) of the form [3]:

f(x) = \sum_{j=1}^{n} c_j x_j = c_1 x_1 + c_2 x_2 + \ldots + c_n x_n.

The problem in which the constraints are inequalities is called the main linear programming problem:

\sum_{j=1}^{n} a_{ij} x_j \geqslant b_i, \quad i = 1, 2, \ldots, m,
x_j \geqslant 0, \quad j = 1, 2, \ldots, n.

The linear programming problem has canonical form if, in the main problem, the first system of inequalities is replaced by a system of equations, i.e. the constraints are equalities [4]:

\sum_{j=1}^{n} a_{ij} x_j = b_i, \quad i = 1, 2, \ldots, m.

The main problem can be reduced to the canonical one by introducing additional variables.
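This reduction can be sketched in code. A minimal Python illustration (the function name `to_canonical` is ours, not from the source): each constraint Σ_j a_ij x_j ≥ b_i becomes an equality by subtracting a nonnegative surplus variable s_i.

```python
def to_canonical(A, b):
    """Convert constraints A x >= b into equalities by appending one
    nonnegative surplus variable s_i per row:
        sum_j a_ij x_j - s_i = b_i.
    The new variables occupy the last m columns of the returned matrix."""
    m = len(A)
    A_eq = []
    for i, row in enumerate(A):
        surplus = [0] * m
        surplus[i] = -1          # subtract s_i >= 0 from row i
        A_eq.append(list(row) + surplus)
    return A_eq, list(b)

A_eq, b_eq = to_canonical([[1, 2], [3, 1]], [4, 5])
print(A_eq)  # [[1, 2, -1, 0], [3, 1, 0, -1]]
```

A maximization problem is handled the same way after negating the objective coefficients, as noted below.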

Linear programming problems of the most general form (problems with mixed constraints - equalities and inequalities - and with variables free of sign constraints) can be reduced to an equivalent problem (one having the same set of solutions) by substitutions of variables and by replacing each equality with a pair of inequalities [5].

It is easy to see that the problem of finding a maximum can be replaced by the problem of finding a minimum by taking the coefficients c with the opposite sign.

Examples of tasks

Maximum Matching

Consider the problem of maximum matching in a bipartite graph: there are several young men and girls, and for each young man and girl it is known whether they like each other. The goal is to marry off the maximum number of couples with mutual sympathy.

Introduce variables x_{ij}, which correspond to the pair of the i-th young man and the j-th girl and satisfy the constraints:

0 \leqslant x_{ij} \leqslant 1,
x_{1j} + x_{2j} + \ldots + x_{nj} \leqslant 1,
x_{i1} + x_{i2} + \ldots + x_{im} \leqslant 1,

with objective function f = c_{11} x_{11} + c_{12} x_{12} + \ldots + c_{nm} x_{nm}, where c_{ij} equals 1 or 0 depending on whether the i-th young man and the j-th girl like each other. It can be shown that among the optimal solutions of this problem there is an integer one. The variables equal to 1 correspond to the pairs that should be married.
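The integer optimum can also be found directly by an augmenting-path (Kuhn's) algorithm rather than a general LP solver; a minimal Python sketch, with illustrative names and made-up data:

```python
def max_matching(likes, n, m):
    """Kuhn's augmenting-path algorithm for maximum bipartite matching.
    likes[i][j] is True when young man i and girl j like each other;
    returns the number of matched pairs (the optimal integer LP value)."""
    match_to = [-1] * m               # girl j -> matched young man, or -1

    def try_augment(i, seen):
        for j in range(m):
            if likes[i][j] and not seen[j]:
                seen[j] = True
                # girl j is free, or her partner can be re-matched elsewhere
                if match_to[j] == -1 or try_augment(match_to[j], seen):
                    match_to[j] = i
                    return True
        return False

    return sum(try_augment(i, [False] * m) for i in range(n))

likes = [[True, True, False],
         [True, False, False],
         [False, True, True]]
print(max_matching(likes, 3, 3))  # 3 pairs can be married
```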

Max Flow

Let there be a graph (with directed edges) in which a throughput capacity is given for each edge, and two distinguished vertices: the source and the sink. It is necessary to specify for each edge how much flow passes through it (not more than its capacity) so as to maximize the total flow from the source to the sink (flow can neither appear nor disappear at any vertex other than the source and the sink).

Take as variables x_i - the amount of flow through the i-th edge. Then

0 \leqslant x_i \leqslant c_i,

where c_i is the capacity of the i-th edge. These inequalities must be supplemented by the equality of inflow and outflow at every vertex other than the source and the sink. As the function f it is natural to take the difference between the outflow and the inflow at the source.

A generalization of the previous problem is the maximum flow of minimum cost. In this problem, a cost is given for each edge, and among the maximum flows one must choose a flow of minimum cost. The problem reduces to two linear programming problems: first solve the maximum flow problem, then add to it the constraint f(x) \geqslant m, where m is the value of the maximum flow, and solve the problem with a new objective function - the cost of the flow.

These problems can be solved faster than by general linear programming algorithms, thanks to the special structure of their equations and inequalities.
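One such specialized algorithm for maximum flow is Edmonds-Karp, which repeatedly augments along shortest paths in the residual network; a compact Python sketch (the capacity matrix is made-up illustrative data):

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: push flow along a shortest augmenting path (found
    by BFS) in the residual network until none remains.
    capacity is an n x n matrix of edge capacities c_i."""
    n = len(capacity)
    residual = [row[:] for row in capacity]
    total = 0
    while True:
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:           # no augmenting path left
            return total
        # bottleneck along the path, then update residual capacities
        bottleneck, v = float('inf'), sink
        while v != source:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        v = sink
        while v != source:
            residual[parent[v]][v] -= bottleneck
            residual[v][parent[v]] += bottleneck
            v = parent[v]
        total += bottleneck

# Source 0, sink 3; zero entries mean "no edge".
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # 4
```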

Transport Problem

There is some uniform cargo that needs to be transported from n warehouses to m factories. For each warehouse i it is known how much cargo a_i it holds, and for each factory j its demand b_j for the cargo is known. The cost of transportation is proportional to the distance from warehouse to factory (all distances c_{ij} from the i-th warehouse to the j-th factory are known). It is required to draw up the cheapest transportation plan.

The decision variables here are x_{ij} - the amounts of cargo transported from the i-th warehouse to the j-th factory. They satisfy the constraints:

x_{i1} + x_{i2} + \ldots + x_{im} \leqslant a_i,
x_{1j} + x_{2j} + \ldots + x_{nj} \geqslant b_j.

The objective function is f(x) = c_{11} x_{11} + c_{12} x_{12} + \ldots + c_{nm} x_{nm}, which must be minimized.

Zero-sum game

There is a matrix A of size n \times m. The first player chooses a number from 1 to n, the second from 1 to m. Then they compare the numbers, and the first player receives a_{ij} points and the second (-a_{ij}) points (i is the number chosen by the first player, j by the second). It is required to find the optimal strategy of the first player.

Suppose that in the optimal strategy the first player chooses the number i with probability p_i. Then the optimal strategy is a solution of the following linear programming problem:

0 \leqslant p_i \leqslant 1,
p_1 + \ldots + p_n = 1,
a_{1j} p_1 + a_{2j} p_2 + \ldots + a_{nj} p_n \geqslant c, \quad j = 1, \ldots, m,

in which the function f(p_1, \ldots, p_n, c) = c must be maximized. The value of c in the optimal solution is the expected payoff of the first player in the worst case.
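When the first player has only two pure strategies, the worst-case payoff can be maximized by a simple grid search over p rather than the LP machinery; a brute-force Python sketch (an approximation, with illustrative names), using the classic matching-pennies matrix as made-up data:

```python
def best_strategy_2xm(A, steps=1000):
    """For a payoff matrix with two rows, search over mixed strategies
    (p, 1-p) of the first player and return the one maximizing the
    worst-case expected payoff c = min_j (p*a_1j + (1-p)*a_2j)."""
    best_p, best_c = 0.0, float('-inf')
    for k in range(steps + 1):
        p = k / steps
        c = min(p * a1j + (1 - p) * a2j for a1j, a2j in zip(A[0], A[1]))
        if c > best_c:
            best_p, best_c = p, c
    return best_p, best_c

# Matching pennies: the optimal strategy is p = 1/2 with value c = 0.
p, c = best_strategy_2xm([[1, -1], [-1, 1]])
print(round(p, 3), round(c, 3))  # 0.5 0.0
```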

Solution Algorithms

The best-known and most widely used method in practice for solving the general linear programming (LP) problem is the simplex method. Although the simplex method is a fairly efficient algorithm that has shown good results on applied LP problems, its worst-case complexity is exponential. The reason is the combinatorial nature of the simplex method, which sequentially traverses the vertices of the polyhedron of feasible solutions in search of the optimal solution.

The first polynomial algorithm, the ellipsoid method, was proposed in 1979 by the Soviet mathematician L. Khachiyan, thereby resolving a problem that had long remained open. The ellipsoid method has a completely different, non-combinatorial nature than the simplex method. From a computational point of view, however, the method turned out to be unpromising. Nevertheless, the very fact of the polynomial complexity of the problem led to the creation of a whole class of efficient LP algorithms - interior-point methods, the first of which was N. Karmarkar's algorithm, proposed in 1984. Algorithms of this type use a continuous interpretation of the LP problem: instead of enumerating the vertices of the polyhedron, they search along trajectories in the space of the problem's variables that do not pass through the vertices. The interior-point method, which, unlike the simplex method, moves through points in the interior of the feasible region, uses the logarithmic barrier function techniques of nonlinear programming developed in the 1960s by Fiacco and McCormick.
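The vertex-to-vertex traversal of the simplex method can be seen in a compact tableau implementation. A teaching sketch in Python for the problem max c·x subject to Ax ≤ b, x ≥ 0, assuming b ≥ 0 (so the slack basis is feasible) and a bounded, nondegenerate problem; production solvers handle the general case far more carefully:

```python
def simplex_max(c, A, b):
    """Textbook tableau simplex: maximize c.x s.t. A x <= b, x >= 0,
    assuming b >= 0 and a bounded optimum."""
    m, n = len(A), len(c)
    # Tableau rows: one per constraint (with slack columns), plus the
    # objective row of reduced costs; last column is the RHS.
    T = [list(row) + [1 if i == j else 0 for j in range(m)] + [b[i]]
         for i, row in enumerate(A)]
    T.append([-cj for cj in c] + [0] * m + [0])
    basis = list(range(n, n + m))        # slack variables start basic
    while True:
        # Entering column: most negative reduced cost.
        col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][col] >= -1e-9:
            break                         # optimal vertex reached
        # Leaving row: minimum ratio test.
        rows = [i for i in range(m) if T[i][col] > 1e-9]
        row = min(rows, key=lambda i: T[i][-1] / T[i][col])
        # Pivot: move to the adjacent vertex.
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row and abs(T[i][col]) > 1e-12:
                f = T[i][col]
                T[i] = [v - f * w for v, w in zip(T[i], T[row])]
        basis[row] = col
    x = [0.0] * n
    for i, bv in enumerate(basis):
        if bv < n:
            x[bv] = T[i][-1]
    return x, T[-1][-1]

# maximize 3x + 5y  s.t.  x <= 4, 2y <= 12, 3x + 2y <= 18
x, val = simplex_max([3, 5], [[1, 0], [0, 2], [3, 2]], [4, 12, 18])
print(x, val)  # [2.0, 6.0] 36.0
```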

Dual linear programming problems

To each linear programming problem [6] of the form

\sum_{j=1}^{n} a_{ij} x_j \leqslant c_i, \quad i = 1, \ldots, m;
x_j \geqslant 0, \quad j = 1, \ldots, n;
F(x) = \sum_{j=1}^{n} b_j x_j \to \max

one can associate another linear programming problem, called the dual or conjugate problem with respect to the original (direct) one. The connection between the original and dual problems consists mainly in the fact that a solution of one of them can be obtained directly from a solution of the other. Let us define the dual problem with respect to the original linear programming problem.

Original problem:

\sum_{j=1}^{n} a_{ij} x_j \leqslant c_i, \quad i = 1, \ldots, m,
x_j \geqslant 0, \quad j = 1, \ldots, n,
F(x) = \sum_{j=1}^{n} b_j x_j \to \max.

Dual problem:

y_i \geqslant 0, \quad i = 1, \ldots, m,
\sum_{i=1}^{m} a_{ij} y_i \geqslant b_j, \quad j = 1, \ldots, n,
T(y) = \sum_{i=1}^{m} c_i y_i \to \min.

If the vectors x and y are admissible solutions of the direct and dual problems, then F(x) \leqslant T(y), and equality is achieved if and only if x and y are optimal solutions. If the objective function of one of the pair of dual problems is unbounded (from above for the original, from below for the dual), then the set of feasible solutions of the other problem is empty.

If the vectors x^* and y^* are optimal solutions of the direct and dual problems, respectively, then the following equalities hold:

x_j^* \left( \sum_{i=1}^{m} a_{ij} y_i^* - b_j \right) = 0, \quad j = 1, \ldots, n;
y_i^* \left( \sum_{j=1}^{n} a_{ij} x_j^* - c_i \right) = 0, \quad i = 1, \ldots, m.

That is, for optimal solutions of the direct and dual problems, slack constraints (those satisfied as strict inequalities) correspond to zero variables, and nonzero variables (those in the basic plan) correspond to tight constraints (inequalities satisfied as equalities). However, there may also be zero variables corresponding to tight constraints.

These properties of dual solutions can significantly reduce solution time when the number of constraints is much greater than the number of variables. In that case one can solve the dual problem, find its basic plan, then select in the direct problem only the constraints corresponding to that plan (all of them must be tight) and solve the resulting ordinary system of linear equations.
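The weak duality relation F(x) ≤ T(y) is easy to check numerically; a small Python sketch with made-up data for A, b, c and feasible points x, y (the notation follows the problems above):

```python
def F(x, b):
    """Primal objective F(x) = sum_j b_j x_j."""
    return sum(bj * xj for bj, xj in zip(b, x))

def T(y, c):
    """Dual objective T(y) = sum_i c_i y_i."""
    return sum(ci * yi for ci, yi in zip(c, y))

A = [[1, 2], [3, 1]]   # a_ij
c = [8, 9]             # primal right-hand sides = dual costs
b = [3, 4]             # primal costs = dual right-hand sides

x = [2, 2]  # primal-feasible: 1*2+2*2 = 6 <= 8 and 3*2+1*2 = 8 <= 9
y = [1, 2]  # dual-feasible:   1*1+3*2 = 7 >= 3 and 2*1+1*2 = 4 >= 4

print(F(x, b), T(y, c))  # 14 26 — F(x) <= T(y), as weak duality asserts
```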

Software

lp_solve - open-source software (under the LGPL, the GNU Lesser General Public License) for solving linear programming problems. lp_solve has an IDE, its own C API, and many other programming interfaces for Java, AMPL, MATLAB, Wolfram Mathematica, O-Matrix, Sysquake, Scilab, Octave, FreeMat, Euler, Python, Sage, PHP, R, and the Microsoft Solver Foundation.

See also

  • Nonlinear programming
  • Dantzig algorithm
  • Graphical method for solving the linear programming problem
  • Linear-fractional programming
  • Input-output model

Notes

  1. Brazovskaya N. V. Optimization Methods: a textbook / Altai State Technical University named after I. I. Polzunov, Center for Distance Learning. - Barnaul: Altai State Technical University Publishing House, 2000. - 120 p. (Source: Altai Regional Universal Scientific Library named after V. Ya. Shishkov (AKUNB).)
  2. Dikin I. I. Iterative solution of linear and quadratic programming problems // Doklady of the USSR Academy of Sciences. - 1967. - Vol. 174, No. 4. - pp. 747-748.
  3. Karmanov, 1986, p. 63.
  4. Karmanov, 1986, p. 80.
  5. Karmanov, 1986, p. 77.
  6. Electronic textbook "Economic and Mathematical Methods": Duality in linear programming.

Literature

  • Abramov L. M., Kapustin V. F. Mathematical Programming: a textbook. - L.: Leningrad State University, 1981. - 328 p.
  • Ackoff R., Sasieni M. Fundamentals of Operations Research / transl. from English by V. Ya. Altayev; ed. I. A. Ushakov. - M.: Mir, 1971. - 551 p.
  • Akulich I. L. Chapter 1. Linear programming problems; Chapter 2. Special problems of linear programming // Mathematical Programming in Examples and Problems. - M.: Higher School, 1986. - 319 p. - ISBN 5-06-002663-9.
  • Astafiev N. N. Infinite Systems of Linear Inequalities in Mathematical Programming. - M.: Nauka, 1991. - 134 p.
  • Ashmanov S. A., Timokhov A. V. Optimization Theory in Problems and Exercises. - M.: Nauka, 1991. - 446 p.
  • Gass S. Linear Programming. - M.: Fizmatlit, 1961. - 300 p.
  • Davydov E. G. Operations Research. - M.: Higher School, 1990. - 382 p.
  • Degtyarev Yu. I. Operations Research: a textbook for universities. - M.: Higher School, 1986. - 320 p.
  • Zukhovitsky S. I., Avdeeva L. I. Linear and Convex Programming. - M.: Nauka, 1966. - 348 p.
  • Karmanov V. G. Mathematical Programming. - 3rd ed. - M.: Nauka, 1986. - 288 p.
  • Kuznetsov A. V., Sakovich V. A., Kholod N. I. Higher Mathematics. Mathematical Programming. - Minsk: Higher School, 1994. - 286 p.
  • Thomas H. Cormen et al. Chapter 29. Linear Programming // Introduction to Algorithms. - 2nd ed. - M.: Williams, 2006. - 1296 p. - ISBN 5-8459-0857-4.
  • Yudin D. B., Golshtein E. G. Linear Programming. - M.: Nauka, 1969. - 424 p.
  • Dantzig G. B. Reminiscences about the origins of linear programming.

Links

  • Linear Program Solver (LiPS) - a free optimization package designed to solve linear, integer and goal programming problems.
  • Vershik A. M. On L. V. Kantorovich and linear programming
  • Linear Slides
  • Bolshakova I. V., Kuralenko M. V. Linear programming: a training manual for the test.
  • Barsov A. S. What is linear programming // Popular Lectures in Mathematics , Gostekhizdat, 1959.
  • Vyalyi M. N. Linear Inequalities and Combinatorics. - MCCME, 2003.
Source - https://ru.wikipedia.org/w/index.php?title=Linear_programing&oldid=98055716

