Clever Geek Handbook

Convex programming

Convex programming is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets. While many classes of convex programming problems admit polynomial-time algorithms [1], mathematical optimization in the general case is NP-hard [2] [3] [4].

Convex programming is used in a number of disciplines, such as automatic control systems, signal estimation and processing, communication and networks, electronic circuit design [5], data analysis and modeling, finance, statistics, and structural optimization [6] [7]. The development of computer technology and optimization algorithms has made convex programming almost as straightforward to use as linear programming [8].


Definition

A convex programming problem is an optimization problem in which the objective function is a convex function and the set of feasible solutions is convex. A function $f$ mapping some subset of $\mathbb{R}^n$ into $\mathbb{R} \cup \{\pm\infty\}$ is convex if its domain is convex and, for all $\theta \in [0,1]$ and all $x, y$ in its domain, $f(\theta x + (1-\theta) y) \leqslant \theta f(x) + (1-\theta) f(y)$. A set is convex if, for any two of its elements $x, y$ and any $\theta \in [0,1]$, the point $\theta x + (1-\theta) y$ also belongs to the set.
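The defining inequality lends itself to a quick numerical sanity check. The sketch below is illustrative only (the helper name `is_convex_on_samples` is invented here): it tests the inequality on random point pairs, which cannot prove convexity, but a single violation disproves it.

```python
import random

def is_convex_on_samples(f, lo, hi, trials=1000, tol=1e-9):
    """Test f(theta*x + (1-theta)*y) <= theta*f(x) + (1-theta)*f(y)
    on random pairs; any violation disproves convexity."""
    random.seed(0)  # deterministic sampling
    for _ in range(trials):
        x, y = random.uniform(lo, hi), random.uniform(lo, hi)
        theta = random.random()
        lhs = f(theta * x + (1 - theta) * y)
        rhs = theta * f(x) + (1 - theta) * f(y)
        if lhs > rhs + tol:
            return False
    return True

print(is_convex_on_samples(lambda x: x * x, -5, 5))   # x^2 is convex -> True
print(is_convex_on_samples(lambda x: x ** 3, -5, 5))  # x^3 is not convex on [-5, 5] -> False
```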

In particular, a convex programming problem is the problem of finding some $x^{\ast} \in C$ at which

$\inf \{ f(x) : x \in C \}$

is attained, where the objective function $f$ is convex, as is the feasible set $C$ [9] [10]. If such a point exists, it is called an optimal point; the set of all optimal points is called the optimal set. If $f$ is unbounded below on $C$, or the infimum is not attained, the problem is said to be unbounded. If $C$ is empty, the problem is said to be infeasible [11].

Standard form

A convex programming problem is said to be in standard form if it is written as

Minimize $f(x)$
subject to
$g_i(x) \leqslant 0, \quad i = 1, \dots, m,$
$h_i(x) = 0, \quad i = 1, \dots, p,$

where $x \in \mathbb{R}^n$ is the optimization variable, the functions $f, g_1, \ldots, g_m$ are convex, and the functions $h_1, \ldots, h_p$ are affine [11].

In these terms, $f$ is the objective function of the problem, and the functions $g_i$ and $h_i$ are called the constraint functions. The feasible set of the optimization problem consists of all points $x \in \mathbb{R}^n$ satisfying $g_1(x) \leqslant 0, \ldots, g_m(x) \leqslant 0$ and $h_1(x) = 0, \ldots, h_p(x) = 0$. This set is convex, since sublevel sets of convex functions are convex, affine sets are convex, and the intersection of convex sets is a convex set [12].

Many optimization problems can be reduced to this standard form. For example, the problem of maximizing a concave function $f$ can be reformulated equivalently as the problem of minimizing the convex function $-f$, so the problem of maximizing a concave function over a convex set is often also referred to as a convex programming problem.
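As a concrete instance of the standard form (problem data chosen here purely for illustration), the sketch below minimizes $f(x) = (x_1-1)^2 + (x_2-2)^2$ subject to the single inequality constraint $g(x) = x_1 + x_2 - 1 \leqslant 0$ using a projected-gradient iteration; the exact solution is the projection point $(0, 1)$.

```python
def project_halfspace(x, a, b):
    """Euclidean projection of x onto the convex set {z : a.z <= b}."""
    s = sum(ai * xi for ai, xi in zip(a, x)) - b
    if s <= 0:
        return list(x)  # already feasible
    n2 = sum(ai * ai for ai in a)
    return [xi - s * ai / n2 for ai, xi in zip(a, x)]

def projected_gradient(x0, grad, proj, step=0.1, iters=500):
    """Gradient step followed by projection back onto the feasible set."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = proj([xi - step * gi for xi, gi in zip(x, g)])
    return x

# minimize (x1 - 1)^2 + (x2 - 2)^2  subject to  x1 + x2 <= 1
grad = lambda x: [2 * (x[0] - 1), 2 * (x[1] - 2)]
proj = lambda x: project_halfspace(x, [1.0, 1.0], 1.0)
x = projected_gradient([0.0, 0.0], grad, proj)
print(x)  # converges to approximately (0, 1)
```

Projection onto a halfspace has a closed form, which makes projected gradient a natural fit for this toy constraint; general convex feasible sets require more sophisticated solvers.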

Properties

Useful properties of convex programming problems [13] [11] :

  • any local minimum is a global minimum ;
  • the optimal set is convex;
  • if the objective function is strictly convex, the problem has at most one optimal point.
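The first property can be observed numerically: for a convex objective, gradient descent reaches the same global minimizer from any starting point. A minimal sketch (objective chosen here for illustration):

```python
def gradient_descent(grad, x0, step=0.1, iters=200):
    """Plain gradient descent for a one-dimensional smooth objective."""
    x = x0
    for _ in range(iters):
        x -= step * grad(x)
    return x

# f(x) = (x - 3)^2: every local minimum is the global minimum at x = 3
grad = lambda x: 2 * (x - 3)
results = [gradient_descent(grad, x0) for x0 in (-10.0, 0.0, 25.0)]
print(results)  # all three runs land at (approximately) 3.0
```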

These results are used in the theory of convex minimization together with geometric concepts from functional analysis (on Hilbert spaces), such as the Hilbert projection theorem, the supporting hyperplane theorem, and Farkas' lemma.

Examples

 
[Figure: hierarchy of convex programming problem classes.
LP: linear programming;
QP: quadratic programming;
SOCP: second-order cone programming;
SDP: semidefinite programming;
CP: conic programming;
GFP: graph form programming.]

The following classes of problems are convex programming problems or can be reduced to convex programming problems by simple transformations [11] [14] :

  • Least squares
  • Linear programming
  • Convex quadratic optimization with linear constraints
  • Geometric programming
  • Semidefinite programming
  • Entropy maximization with suitable constraints
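For example, least squares is the unconstrained convex quadratic problem of minimizing $\|Ax - b\|^2$, whose minimizer solves the normal equations $A^{T}A x = A^{T}b$. A small self-contained sketch (the helper `lstsq_2col` is an invented name, hard-coded to two columns):

```python
def lstsq_2col(A, b):
    """Minimize ||Ax - b||^2 for a 2-column matrix A by solving the
    2x2 normal equations A^T A x = A^T b with Cramer's rule."""
    m = len(A)
    ata = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(2)]
           for i in range(2)]
    atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    return [(ata[1][1] * atb[0] - ata[0][1] * atb[1]) / det,
            (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det]

# fit y = c0 + c1 * t through points lying exactly on y = 1 + 2t
ts = (0.0, 1.0, 2.0, 3.0)
A = [[1.0, t] for t in ts]
b = [1.0 + 2.0 * t for t in ts]
print(lstsq_2col(A, b))  # recovers coefficients close to [1.0, 2.0]
```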

Lagrange multiplier method

Consider a convex minimization problem given in standard form with a cost function $f(x)$ and inequality constraints $g_i(x) \leqslant 0$ for $1 \leqslant i \leqslant m$. Then the domain $\mathcal{X}$ is:

$\mathcal{X} = \{ x \in X \mid g_1(x) \leqslant 0, \ldots, g_m(x) \leqslant 0 \}.$

The Lagrange function for this problem is

$L(x, \lambda_0, \lambda_1, \ldots, \lambda_m) = \lambda_0 f(x) + \lambda_1 g_1(x) + \cdots + \lambda_m g_m(x).$

For any point $x$ in $X$ that minimizes $f$ over $X$, there exist real numbers $\lambda_0, \lambda_1, \ldots, \lambda_m$, called Lagrange multipliers, that simultaneously satisfy the conditions:

  1. $x$ minimizes $L(y, \lambda_0, \lambda_1, \ldots, \lambda_m)$ over all $y \in X$,
  2. $\lambda_0, \lambda_1, \ldots, \lambda_m \geqslant 0$, with at least one $\lambda_k > 0$,
  3. $\lambda_1 g_1(x) = \cdots = \lambda_m g_m(x) = 0$ (complementary slackness).

If there exists a "strictly feasible point", that is, a point $z$ satisfying

$g_1(z) < 0, \ldots, g_m(z) < 0,$

then the statement above can be strengthened to require $\lambda_0 = 1$.

Conversely, if some $x$ in $X$ satisfies conditions (1)-(3) for scalars $\lambda_0, \ldots, \lambda_m$ with $\lambda_0 = 1$, then $x$ is guaranteed to minimize $f$ over $X$.
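Conditions (1)-(3) can be checked numerically on a toy problem (chosen here purely for illustration): minimize $f(x) = x^2$ subject to $g(x) = 1 - x \leqslant 0$, whose optimum is $x^{\ast} = 1$ with multiplier $\lambda = 2$ (and $\lambda_0 = 1$):

```python
# minimize f(x) = x^2  subject to  g(x) = 1 - x <= 0  (i.e. x >= 1)
f_grad = lambda x: 2 * x      # gradient of the objective
g = lambda x: 1 - x           # inequality constraint
g_grad = lambda x: -1.0       # gradient of the constraint

x_star, lam = 1.0, 2.0        # candidate optimum and multiplier
# stationarity of L(x, 1, lambda) at x*: f'(x*) + lambda * g'(x*) = 0
stationarity = f_grad(x_star) + lam * g_grad(x_star)
# complementary slackness: lambda * g(x*) = 0
slackness = lam * g(x_star)
print(stationarity, slackness)  # both vanish at the optimum
```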

Algorithms

Convex programming problems are solved by the following modern methods [15]:

  • Bundle methods (Wolfe, Lemaréchal, Kiwiel),
  • Subgradient projection methods (Polyak),
  • Interior-point methods [1], using self-concordant barrier functions [16] and self-regular barrier functions [17],
  • Cutting-plane methods,
  • Ellipsoid method.

Subgradient methods are simple to implement and are therefore widely used [18] [19]. Dual subgradient methods are subgradient methods applied to a dual problem. The drift-plus-penalty method is similar to the dual subgradient method, but uses the time average of the primal variables.
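A minimal sketch of a subgradient method (objective and step rule chosen here for illustration): minimizing the nondifferentiable convex function $f(x) = |x - 2|$ with diminishing steps $1/(k+1)$, keeping the best iterate seen, since subgradient steps need not decrease $f$ monotonically:

```python
def subgradient_method(f, subgrad, x0, iters=5000):
    """Subgradient descent with diminishing steps 1/(k+1);
    tracks the best iterate because steps are not monotone."""
    x, best = x0, x0
    for k in range(iters):
        x = x - subgrad(x) / (k + 1)
        if f(x) < f(best):
            best = x
    return best

f = lambda x: abs(x - 2)
# a subgradient of |x - 2|: sign(x - 2), with 0 allowed at the kink
sg = lambda x: 1.0 if x > 2 else (-1.0 if x < 2 else 0.0)
best = subgradient_method(f, sg, -5.0)
print(best)  # close to the minimizer x = 2
```

The diminishing step sizes satisfy the standard conditions (steps tend to zero, their sum diverges) under which the best iterate of the subgradient method converges to the optimal value.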

Extensions

Extensions of convex programming include the optimization of biconvex, pseudoconvex, and quasiconvex functions. Extensions of the theory of convex analysis and iterative methods for approximately solving non-convex optimization problems are found in the field of generalized convexity, also known as abstract convex analysis.

See also

  • Duality
  • Karush–Kuhn–Tucker conditions
  • Optimization (mathematics)

Notes

  1. ↑ 1 2 Nesterov, Nemirovskii, 1994.
  2. ↑ Murty, Kabadi, 1987, pp. 117–129.
  3. ↑ Sahni, 1974, pp. 262–279.
  4. ↑ Pardalos, Vavasis, 1991, pp. 15–22.
  5. ↑ Boyd, Vandenberghe, 2004, p. 17.
  6. ↑ Christensen, Klarbring, 2008, chpt. 4.
  7. ↑ Boyd, Vandenberghe, 2004.
  8. ↑ Boyd, Vandenberghe, 2004, p. 8.
  9. ↑ Hiriart-Urruty, Lemaréchal, 1996, p. 291.
  10. ↑ Ben-Tal, Nemirovskiĭ, 2001, pp. 335–336.
  11. ↑ 1 2 3 4 Boyd, Vandenberghe, 2004, chpt. 4.
  12. ↑ Boyd, Vandenberghe, 2004, chpt. 2.
  13. ↑ Rockafellar, 1993, pp. 183–238.
  14. ↑ Agrawal, Verschueren, Diamond, Boyd, 2018, pp. 42–60.
  15. ↑ For methods of convex programming, see the books by Hiriart-Urruty and Lemaréchal (several books) and the books by Ruszczyński and Bertsekas, as well as Boyd and Vandenberghe (interior-point methods).
  16. ↑ Nesterov, Nemirovskii, 1995.
  17. ↑ Peng, Roos, Terlaky, 2002, pp. 129–171.
  18. ↑ Bertsekas, 2009.
  19. ↑ Bertsekas, 2015.

Literature

  • Jean-Baptiste Hiriart-Urruty, Claude Lemaréchal. Convex analysis and minimization algorithms: Fundamentals. - 1996. - ISBN 9783540568506.
  • Aharon Ben-Tal, Arkadiĭ Semenovich Nemirovskiĭ. Lectures on modern convex optimization: analysis, algorithms, and engineering applications. - 2001. - ISBN 9780898714913.
  • Katta Murty, Santosh Kabadi. Some NP-complete problems in quadratic and nonlinear programming // Mathematical Programming. - 1987. - Vol. 39, no. 2. - P. 117–129. - DOI: 10.1007/BF02592948.
  • Sahni S. Computationally related problems // SIAM Journal on Computing. - 1974. - Vol. 3.
  • Panos M. Pardalos, Stephen A. Vavasis. Quadratic programming with one negative eigenvalue is NP-hard // Journal of Global Optimization. - 1991. - Vol. 1, no. 1.
  • R. Tyrrell Rockafellar. Convex analysis. - Princeton: Princeton University Press, 1970.
  • R. Tyrrell Rockafellar. Lagrange multipliers and optimality // SIAM Review. - 1993. - Vol. 35, no. 2. - DOI: 10.1137/1035044.
  • Akshay Agrawal, Robin Verschueren, Steven Diamond, Stephen Boyd. A rewriting system for convex optimization problems // Control and Decision. - 2018. - Vol. 5, no. 1. - DOI: 10.1080/23307706.2017.1397554.
  • Yurii Nesterov, Arkadii Nemirovskii. Interior-Point Polynomial Algorithms in Convex Programming. - Society for Industrial and Applied Mathematics, 1995. - ISBN 978-0898715156.
  • Yurii Nesterov, Arkadii Nemirovskii. Interior Point Polynomial Methods in Convex Programming. - SIAM, 1994. - Vol. 13. - (Studies in Applied and Numerical Mathematics). - ISBN 978-0-89871-319-0.
  • Yurii Nesterov. Introductory Lectures on Convex Optimization. - Boston, Dordrecht, London: Kluwer Academic Publishers, 2004. - Vol. 87. - (Applied Optimization). - ISBN 1-4020-7553-7.
  • Jiming Peng, Cornelis Roos, Tamás Terlaky. Self-regular functions and new search directions for linear and semidefinite optimization // Mathematical Programming. - 2002. - Vol. 93, no. 1. - ISSN 0025-5610. - DOI: 10.1007/s101070200296.
  • Dimitri P. Bertsekas, Angelia Nedic, Asuman Ozdaglar. Convex Analysis and Optimization. - Athena Scientific, 2003. - ISBN 978-1-886529-45-8.
  • Dimitri P. Bertsekas. Convex Optimization Theory. - Belmont, MA: Athena Scientific, 2009. - ISBN 978-1-886529-31-1.
  • Dimitri P. Bertsekas. Convex Optimization Algorithms. - Belmont, MA: Athena Scientific, 2015. - ISBN 978-1-886529-28-1.
  • Stephen P. Boyd, Lieven Vandenberghe. Convex Optimization. - Cambridge University Press, 2004. - ISBN 978-0-521-83378-3.
  • Jonathan M. Borwein, Adrian Lewis. Convex Analysis and Nonlinear Optimization. - Springer, 2000. - (CMS Books in Mathematics). - ISBN 0-387-29570-4.
  • Peter W. Christensen, Anders Klarbring. An introduction to structural optimization. - Springer Science & Business Media, 2008. - Vol. 153. - ISBN 9781402086663.
  • Jean-Baptiste Hiriart-Urruty, Claude Lemaréchal. Fundamentals of Convex analysis. - Berlin: Springer, 2004. - (Grundlehren text editions). - ISBN 978-3-540-42205-1.
  • Jean-Baptiste Hiriart-Urruty, Claude Lemaréchal. Convex analysis and minimization algorithms, Volume I: Fundamentals. - Berlin: Springer-Verlag, 1993. - Vol. 305. - P. xviii+417. - ISBN 978-3-540-56850-6.
  • Jean-Baptiste Hiriart-Urruty, Claude Lemaréchal. Convex analysis and minimization algorithms, Volume II: Advanced theory and bundle methods. - Berlin: Springer-Verlag, 1993. - Vol. 306. - P. xviii+346. - ISBN 978-3-540-56852-0.
  • Krzysztof C. Kiwiel. Methods of Descent for Nondifferentiable Optimization. - New York: Springer-Verlag, 1985. - (Lecture Notes in Mathematics). - ISBN 978-3-540-15642-0.
  • Claude Lemaréchal. Lagrangian relaxation // Computational combinatorial optimization: Papers from the Spring School held in Schloß Dagstuhl, May 15–19, 2000. - Berlin: Springer-Verlag, 2001. - Vol. 2241. - P. 112–156. - ISBN 978-3-540-42877-0. - DOI: 10.1007/3-540-45586-8_4.
  • Andrzej Ruszczyński. Nonlinear Optimization. - Princeton University Press, 2006.
  • Kamenev G.K. Optimal adaptive methods for polyhedral approximation of convex bodies. - Moscow: CC RAS, 2007. - 230 p. - ISBN 5-201-09876-2.
  • Kamenev G.K. Numerical study of the effectiveness of methods of polyhedral approximation of convex bodies. - Moscow: Computing Center of the Russian Academy of Sciences, 2010. - 118 p. - ISBN 978-5-91601-043-5.

Links

  • Stephen Boyd, Lieven Vandenberghe, Convex optimization (pdf)
  • EE364a: Convex Optimization I, EE364b: Convex Optimization II, Stanford University Course Pages
  • 6.253: Convex Analysis and Optimization , MIT OCW Course Page
  • Brian Borchers, An overview of software for convex optimization
Source: https://ru.wikipedia.org/w/index.php?title=Convex_Program&oldid=101655719

