
Quadratic programming

Quadratic programming ( QP ) is the process of solving a special type of optimization problem, namely, the optimization problem (minimization or maximization) of a quadratic function of several variables under linear constraints on these variables. Quadratic programming is a special case of nonlinear programming .

Content

  • 1 Problem statement
  • 2 Solution methods
    • 2.1 Equality constraints
  • 3 Lagrangian duality
  • 4 Complexity
  • 5 Solvers and scripting languages
  • 6 See also
  • 7 Notes
  • 8 Literature
  • 9 References

Problem statement

The quadratic programming problem with n variables and m constraints can be formulated as follows[1].

Given:

  • a real n-dimensional vector c,
  • an n × n real symmetric matrix Q,
  • an m × n real matrix A,
  • a real m-dimensional vector b.

The goal of the quadratic programming problem is to find an n-dimensional vector x that

    minimizes    (1/2) x^T Q x + c^T x
    subject to   A x ≤ b,

where x^T denotes the transpose of the vector x. The notation A x ≤ b means that every component of the vector A x is less than or equal to the corresponding component of the vector b.
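
As a concrete reading of this notation, the objective and the componentwise constraint can be evaluated numerically. A minimal sketch, using toy values of Q, c, A, and b invented here purely for illustration:

```python
import numpy as np

# Toy problem data (illustrative values, not from the text):
Q = np.array([[2.0, 0.0],
              [0.0, 2.0]])   # n x n symmetric matrix
c = np.array([-2.0, -5.0])   # n-vector
A = np.array([[1.0, 1.0]])   # m x n constraint matrix
b = np.array([2.0])          # m-vector

def objective(x):
    """Quadratic objective (1/2) x^T Q x + c^T x."""
    return 0.5 * x @ Q @ x + c @ x

def feasible(x):
    """Componentwise check of the constraint A x <= b."""
    return bool(np.all(A @ x <= b))

x = np.array([0.25, 1.75])
print(objective(x), feasible(x))
```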

A related problem, quadratically constrained quadratic programming, has quadratic constraints on the variables.

Solution methods

For general Q, a variety of methods is in common use, among them:

  • interior point methods
  • active set methods [2]
  • augmented Lagrangian methods [3]
  • the conjugate gradient method
  • the gradient projection method
  • extensions of the simplex method [2][4]
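
As an illustration of the gradient projection idea from the list above, here is a minimal sketch for a toy problem with a single halfspace constraint (all data invented for illustration; projecting onto a single halfspace has a simple closed form, which keeps the sketch self-contained):

```python
import numpy as np

# Toy QP: minimize (1/2) x^T Q x + c^T x  subject to  a^T x <= b.
Q = np.array([[2.0, 0.0],
              [0.0, 2.0]])
c = np.array([-2.0, -5.0])
a = np.array([1.0, 1.0])
b = 2.0

def project(x):
    """Euclidean projection onto the halfspace a^T x <= b."""
    violation = a @ x - b
    if violation <= 0:
        return x
    return x - (violation / (a @ a)) * a

# Projected gradient iteration: x <- P(x - step * grad f(x)).
x = np.zeros(2)
step = 0.5  # 1/L, where L is the largest eigenvalue of Q
for _ in range(100):
    x = project(x - step * (Q @ x + c))

print(x)  # the constrained minimizer
```

At the solution the gradient Q x + c is a nonpositive multiple of a with the constraint active, which is exactly the optimality condition for one inequality constraint.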

In the case where Q is positive definite, the problem is a special case of the more general field of convex optimization.

Equality constraints

The quadratic programming problem is considerably simpler when Q is positive definite and all the constraints are equalities (there are no inequalities), since the problem can then be reduced to solving a system of linear equations. Using Lagrange multipliers and seeking the extremum of the Lagrangian, it is easy to show that the solution of the problem

    minimize     (1/2) x^T Q x + c^T x
    subject to   B x = d

is determined by the system of linear equations

    [ Q  B^T ] [ x ]   [ -c ]
    [ B   0  ] [ λ ] = [  d ]

where λ is a set of Lagrange multipliers that appears alongside the solution x.

The easiest way to solve this system is with direct methods (for example, an LU decomposition, which is very practical for small problems). For large problems the system poses some unusual difficulties, most notably that it is never positive definite (even if Q is), which can make finding a good numerical approach difficult, and there are many problem-specific approaches.
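
A minimal sketch of the direct approach, assembling the block system above and solving it with a dense (LU-based) solver, using toy data invented for illustration:

```python
import numpy as np

# Equality-constrained QP: minimize (1/2) x^T Q x + c^T x  s.t.  B x = d.
Q = np.array([[4.0, 1.0],
              [1.0, 2.0]])       # positive definite
c = np.array([1.0, 1.0])
B = np.array([[1.0, 1.0]])       # one equality constraint
d = np.array([1.0])

n, m = Q.shape[0], B.shape[0]

# Assemble the KKT system  [[Q, B^T], [B, 0]] [x; lambda] = [-c; d].
KKT = np.block([[Q, B.T],
                [B, np.zeros((m, m))]])
rhs = np.concatenate([-c, d])

sol = np.linalg.solve(KKT, rhs)  # direct LU-based solve
x, lam = sol[:n], sol[n:]

print(x, lam)
```

The recovered x satisfies B x = d exactly (up to rounding), and the multipliers come out of the same solve.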

If the number of constraints is smaller than the number of variables, a relatively simple attack is to change variables so that the constraints are satisfied automatically. For example, suppose that d = 0 (generalizing to nonzero d is straightforward). Looking at the constraints

    B x = 0,

introduce a new vector y defined by

    Z y = x,

where y has dimension equal to the dimension of x minus the number of constraints. Then

    B Z y = 0,

and if the matrix Z is chosen so that B Z = 0, the equality constraints are always satisfied. Finding such a Z amounts to finding the null space of B, which is more or less simple depending on the structure of B. Substituting the new vector into the original objective gives a quadratic problem without constraints:

    (1/2) x^T Q x + c^T x   ⇒   (1/2) y^T Z^T Q Z y + (Z^T c)^T y,

and the solution can be found by solving the equation

    Z^T Q Z y = -Z^T c.

Under certain conditions on Q, the reduced matrix Z^T Q Z will be positive definite. One can write a variant of the conjugate gradient method in which the matrix Z never needs to be computed explicitly [5].
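
The variable-substitution procedure above can be sketched as follows, computing Z from the SVD of B (toy data invented for illustration; in practice Z would be obtained without forming it explicitly, as noted above):

```python
import numpy as np

# Null-space method for: minimize (1/2) x^T Q x + c^T x  s.t.  B x = 0.
Q = np.array([[4.0, 1.0],
              [1.0, 2.0]])
c = np.array([1.0, 0.0])
B = np.array([[1.0, 1.0]])        # constraint B x = 0

# Columns of Z span the null space of B, so B Z = 0 by construction.
_, _, Vt = np.linalg.svd(B)
rank = np.linalg.matrix_rank(B)
Z = Vt[rank:].T                   # shape (n, n - m)

# Solve the reduced unconstrained problem Z^T Q Z y = -Z^T c.
y = np.linalg.solve(Z.T @ Q @ Z, -Z.T @ c)
x = Z @ y                         # recover the original variables

print(x, B @ x)  # B x is (numerically) zero
```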

Lagrangian duality

The Lagrangian dual of a quadratic programming problem is also a quadratic programming problem. To see this, consider the case c = 0 with a positive definite matrix Q. We write the Lagrangian function

    L(x, λ) = (1/2) x^T Q x + λ^T (A x - b).

Defining the (Lagrangian) dual function g(λ) as g(λ) = inf_x L(x, λ), we find the infimum of L by using ∇_x L(x, λ) = 0 and the positive definiteness of Q:

    x* = -Q^{-1} A^T λ.

Therefore, the dual function is equal to

    g(λ) = -(1/2) λ^T A Q^{-1} A^T λ - λ^T b,

and the Lagrangian dual of the original problem is

    maximize     -(1/2) λ^T A Q^{-1} A^T λ - λ^T b
    subject to   λ ≥ 0.
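
With a single constraint the dual quadratic form is one-dimensional, so its constrained maximizer has a closed form. A minimal sketch with invented data, recovering the primal solution from x* = -Q^{-1} A^T λ:

```python
import numpy as np

# Primal (c = 0, Q positive definite): minimize (1/2) x^T x  s.t.  A x <= b.
Q = np.array([[1.0, 0.0],
              [0.0, 1.0]])
A = np.array([[1.0, 1.0]])
b = np.array([-1.0])              # makes the constraint active at the optimum

Qinv = np.linalg.inv(Q)
M = A @ Qinv @ A.T                # matrix of the dual quadratic form

# Setting d g / d lambda = -M lambda - b = 0 and clipping at zero
# gives the dual maximizer for the scalar case.
lam = np.maximum(0.0, -b / M[0, 0])

# Recover the primal solution x* = -Q^{-1} A^T lambda.
x_star = -Qinv @ A.T @ lam

print(lam, x_star)
```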

Besides Lagrangian duality theory, there are other duality pairings (for example, Wolfe duality).

Complexity

For positive definite Q, the ellipsoid method solves the problem in polynomial time [6]. If, on the other hand, Q is indefinite, the problem is NP-hard [7]. In fact, even if Q has only one negative eigenvalue, the problem is NP-hard [8].

Solvers and scripting languages

  • A system for modeling and solving optimization and scheduling problems.
  • A numerical-analysis library (C++, .NET) with dual licensing (GPL/proprietary).
  • AMPL: a popular modeling language for large-scale mathematical optimization.
  • Modeling and optimization of LP (linear programming), QP (quadratic programming), NLP (nonlinear programming), MILP (mixed-integer linear programming), and MINLP (mixed-integer nonlinear programming) problems and differential-algebraic equations in MATLAB and Python.
  • An open-source computational geometry package that includes a quadratic programming solver.
  • CPLEX: a popular solver with APIs for C, C++, Java, .NET, Python, MATLAB, and R. Free for academic use.
  • Excel Solver: a nonlinear solver adapted to spreadsheets, in which function evaluations are based on cell values. The basic version is available as a standard add-on for Excel.
  • A high-level modeling system for mathematical optimization.
  • A solver with parallel algorithms for large-scale linear programming, quadratic programming, and mixed-integer programming problems. Free for academic use.
  • A collection of mathematical and statistical functions that programmers can embed in their applications.
  • IPOPT (Interior Point OPTimizer): a software package for large-scale nonlinear optimization.
  • A commercial integrated package for nonlinear optimization.
  • Maple: a general-purpose programming language for mathematics; quadratic problems are solved with the QPSolve command.
  • MATLAB: a matrix-oriented general-purpose language for numerical computing. Solving quadratic programming problems in MATLAB requires the Optimization Toolbox in addition to the base product.
  • Mathematica: a general-purpose language for mathematics with symbolic and numerical capabilities.
  • A solver for large-scale optimization problems with APIs for several languages (C++, Java, .NET, MATLAB, and Python).
  • A collection of mathematical and statistical procedures for several programming languages (C, C++, Fortran, Visual Basic, Java, and C#) and packages (MATLAB, Excel, R, LabVIEW). The optimization chapter of the NAG Library includes routines for quadratic programming problems with sparse and dense constraint matrices, as well as routines for optimizing linear and nonlinear functions and sums of squares of linear and nonlinear functions; it covers both local and global optimization, as well as integer programming.
  • A free, Java-based modeling language for optimization that supports several target solvers; a plugin for Eclipse is available [9][10].
  • R: a universal cross-platform computing environment under the GPL license (see the quadprog package). It has been ported to JavaScript under the MIT license and to C# under the LGPL license.
  • A system for solving linear, integer, combinatorial, nonlinear, and nondifferentiable problems, as well as network and constrained-optimization problems; OPTMODEL and a number of task-specific vertical solutions are fully integrated with it.
  • A mathematical modeling and problem-solving system based on a declarative, rule-based language, commercialized by Universal Technical Systems, Inc.
  • TOMLAB: supports global optimization, integer programming, all kinds of least squares, and linear and quadratic programming in MATLAB; it interfaces with CPLEX and other solvers.

See also

  • Support vector machine
  • Sequential quadratic programming
  • Linear programming

Notes

  1. ↑ Nocedal, Wright, 2006, p. 449.
  2. ↑ 1 2 Murty, 1988, pp. xlviii+629.
  3. ↑ Delbos, Gilbert, 2005, pp. 45–69.
  4. ↑ Kunzi, Crelle, 1965, pp. 161–179.
  5. ↑ Gould, Hribar, Nocedal, 2001, pp. 1376–1395.
  6. ↑ Kozlov, Tarasov, Khachiyan, 1980, pp. 1049–1051.
  7. ↑ Sahni, 1974, pp. 262–279.
  8. ↑ Pardalos, Vavasis, 1991, pp. 15–22.
  9. ↑ Zesch, Hellingrath, 2010.
  10. ↑ Burkov, Chaib-draa, 2010.

Literature

  • G.P. Kunzi, W. Crelle. Nonlinear Programming. Moscow: Soviet Radio, 1965.
  • Jorge Nocedal, Stephen J. Wright. Numerical Optimization. 2nd ed. Berlin, New York: Springer-Verlag, 2006. p. 449. ISBN 978-0-387-30303-1.
  • Katta G. Murty. Linear Complementarity, Linear and Nonlinear Programming. Berlin: Heldermann Verlag, 1988. Vol. 3. xlviii+629 pp. (Sigma Series in Applied Mathematics). ISBN 3-88538-403-5. Archived April 1, 2010.
  • F. Delbos, J.Ch. Gilbert. Global linear convergence of an augmented Lagrangian algorithm for solving convex quadratic optimization problems // Journal of Convex Analysis. 2005. Vol. 12. pp. 45–69.
  • Felix Zesch, Bernd Hellingrath. An optimization model for mixed-model assembly lines // Integrated Production-Distribution Planning. 2010.
  • Andriy Burkov, Brahim Chaib-draa. Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI-10). Atlanta, USA: AAAI Press, 2010.
  • Nicholas I.M. Gould, Mary E. Hribar, Jorge Nocedal. On the Solution of Equality Constrained Quadratic Programming Problems Arising in Optimization // SIAM Journal on Scientific Computing. 2001. Vol. 23, no. 4. pp. 1376–1395.
  • M.K. Kozlov, S.P. Tarasov, L.G. Khachiyan. Polynomial solvability of convex quadratic programming // Computational Mathematics and Mathematical Physics. 1980. Vol. 20, no. 5.
  • S. Sahni. Computationally related problems // SIAM Journal on Computing. 1974. Vol. 3. pp. 262–279. DOI: 10.1137/0203021.
  • Panos M. Pardalos, Stephen A. Vavasis. Quadratic programming with one negative eigenvalue is NP-hard // Journal of Global Optimization. 1991. Vol. 1, no. 1. pp. 15–22. DOI: 10.1007/bf00120662.
  • Richard W. Cottle, Jong-Shi Pang, Richard E. Stone. The Linear Complementarity Problem. Boston, MA: Academic Press, Inc., 1992. xxiv+762 pp. (Computer Science and Scientific Computing). ISBN 0-12-192350-9.
  • Michael R. Garey, David S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman, 1979. ISBN 0-7167-1045-5. A6: MP2, p. 245.
  • Nicholas I.M. Gould, Philippe L. Toint. A Quadratic Programming Bibliography (PDF). RAL Numerical Analysis Group Internal Report 2000-1 (2000).

References

  • A page about QP
  • NEOS Optimization Guide: Quadratic Programming
  • Solve an example Quadratic Programming (QP) problem
Source: https://ru.wikipedia.org/w/index.php?title=Quadratic_programming&oldid=97532544

