
Conjugate gradient method (for solving SLAE)

Figure: comparison of gradient descent (green) and the conjugate gradient method for $n = 2$.

The conjugate gradient method is a numerical method for solving systems of linear algebraic equations; it is an iterative method of Krylov type.


Statement of the Problem

Let a system of linear equations $Ax = b$ be given, where the matrix of the system is real and satisfies $A = A^{T} > 0$, that is, it is a symmetric positive definite matrix. Then solving the SLAE can be represented as minimizing the following functional:

$(Ax, x) - 2(b, x) \rightarrow \min$

Here $(\cdot, \cdot)$ denotes the scalar product. Minimizing this functional over Krylov subspaces yields the conjugate gradient algorithm.
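Why the minimizer of this functional solves the system (a one-line check, using the symmetry of $A$): the gradient of $f(x) = (Ax, x) - 2(b, x)$ is

$\nabla f(x) = 2Ax - 2b,$

so the stationary point satisfies $Ax = b$, and positive definiteness of $A$ makes it the unique minimum.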

Algorithm

Preparing for the iterative process
  1. Choose an initial approximation $x^0$
  2. $r^0 = b - Ax^0$
  3. $z^0 = r^0$
$k$-th iteration of the method [1]
  1. $\alpha_k = \dfrac{(r^{k-1}, r^{k-1})}{(Az^{k-1}, z^{k-1})}$
  2. $x^k = x^{k-1} + \alpha_k z^{k-1}$
  3. $r^k = r^{k-1} - \alpha_k Az^{k-1}$
  4. $\beta_k = \dfrac{(r^k, r^k)}{(r^{k-1}, r^{k-1})}$
  5. $z^k = r^k + \beta_k z^{k-1}$
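A minimal NumPy sketch of these steps is given below; the function name, the default tolerance and the iteration cap are illustrative choices of this sketch, not part of the algorithm as stated:

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Solve A x = b for a symmetric positive definite A (illustrative sketch)."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    if max_iter is None:
        max_iter = n                  # in exact arithmetic CG needs at most n steps
    r = b - A @ x                     # r^0 = b - A x^0
    z = r.copy()                      # z^0 = r^0
    rr_old = r @ r
    for _ in range(max_iter):
        Az = A @ z
        alpha = rr_old / (Az @ z)     # alpha_k = (r^{k-1}, r^{k-1}) / (A z^{k-1}, z^{k-1})
        x = x + alpha * z             # x^k = x^{k-1} + alpha_k z^{k-1}
        r = r - alpha * Az            # r^k = r^{k-1} - alpha_k A z^{k-1}
        rr_new = r @ r
        if np.sqrt(rr_new) <= tol * np.linalg.norm(b):   # relative residual test
            break
        beta = rr_new / rr_old        # beta_k = (r^k, r^k) / (r^{k-1}, r^{k-1})
        z = r + beta * z              # z^k = r^k + beta_k z^{k-1}
        rr_old = rr_new
    return x
```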
Stopping criterion

Since the minimized functional is quadratic, the method converges, in exact arithmetic, in at most $n$ iterations; in floating-point arithmetic, however, rounding errors accumulate, so more iterations may be required. The accuracy can also be monitored through the relative residual $\dfrac{\|r^k\|}{\|b\|}$, stopping the iterative process once it falls below a prescribed tolerance.
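With the sketch above, this is exactly the test performed inside the loop; a tiny hypothetical test system (the matrix and right-hand side are made-up values for illustration):

```python
# Small symmetric positive definite test system (illustrative values)
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = conjugate_gradient(A, b, tol=1e-12)
print(x)                                              # approx. [0.0909, 0.6364]
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))  # relative residual
```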

Algorithm for a preconditioned system

Let the preconditioned system have the form $SAQx = Sb$; the algorithm of the method for such a system can then be written as follows:

Preparing for the iterative process
  1. Choose an initial approximation $x^0$
  2. $r^0 = Q(b - SAx^0)$
  3. $z^0 = r^0$
  4. $\widetilde{x}^0 = Qx^0$
$k$-th iteration of the method
  1. $\alpha_k = \dfrac{(r^{k-1}, r^{k-1})}{(SAQz^{k-1}, z^{k-1})}$
  2. $\widetilde{x}^k = \widetilde{x}^{k-1} + \alpha_k z^{k-1}$
  3. $r^k = r^{k-1} - \alpha_k SAQz^{k-1}$
  4. $\beta_k = \dfrac{(r^k, r^k)}{(r^{k-1}, r^{k-1})}$
  5. $z^k = r^k + \beta_k z^{k-1}$
After the iterative process
  1. $x^* = Q^{-1}\widetilde{x}^m$, where $x^*$ is the approximate solution of the system and $m$ is the total number of iterations of the method.
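In code, the two-sided form with $S$ and $Q$ above is rarely implemented literally; a more common variant keeps a single preconditioner $M \approx A$ and applies $M^{-1}$ through a user-supplied routine. The sketch below shows that standard preconditioned CG (PCG), not the two-sided algorithm as stated above; `apply_prec`, the tolerance and the iteration cap are assumptions of this sketch:

```python
def preconditioned_cg(A, b, apply_prec, x0=None, tol=1e-10, max_iter=None):
    """Standard PCG with a single SPD preconditioner M approximating A.

    apply_prec(v) must return M^{-1} v; e.g. a Jacobi preconditioner would
    use apply_prec = lambda v: v / np.diag(A).
    """
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    if max_iter is None:
        max_iter = n
    r = b - A @ x                     # residual of the original system
    z = apply_prec(r)                 # preconditioned residual
    p = z.copy()
    rz_old = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz_old / (Ap @ p)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = apply_prec(r)
        rz_new = r @ z
        beta = rz_new / rz_old
        p = z + beta * p
        rz_old = rz_new
    return x
```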
Stopping criterion

In this case one can use the same stopping criteria as for the original system, taking the preconditioning into account. For example, the relative residual is now computed as $\dfrac{\|\widetilde{r}^k\|}{\|Sb\|}$; alternatively, one can use the residual of the original system, which is obtained as $\dfrac{\|r^k\|}{\|b\|} = \dfrac{\|Q^{-1}\widetilde{r}^k\|}{\|b\|}$.
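Assuming, purely for illustration, that $S$ and $Q$ are available as explicit matrices (in practice only their action on a vector is usually available) and that `r_tilde` holds the preconditioned residual $\widetilde{r}^k$, the two quantities above could be evaluated as:

```python
# r_tilde: preconditioned residual at step k; S, Q: explicit preconditioner
# factors (an idealization used here only to mirror the formulas in the text)
rel_precond  = np.linalg.norm(r_tilde) / np.linalg.norm(S @ b)
rel_original = np.linalg.norm(np.linalg.solve(Q, r_tilde)) / np.linalg.norm(b)  # Q^{-1} r_tilde
```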

Features and Generalizations

  • Like all Krylov-subspace methods, the conjugate gradient method only needs to multiply the matrix by a vector, which makes it possible to use special matrix storage formats (for example, sparse formats) and to save memory on storing the matrix; see the sketch after this list.
  • The method is often used to solve SLAEs arising from finite element discretizations.
  • The method has generalizations: the biconjugate gradient method for working with nonsymmetric matrices, and a complex version of the conjugate gradient method in which the matrix $A$ may contain complex entries but must satisfy $A = A^{*} > 0$, that is, be a self-adjoint (Hermitian) positive definite matrix.
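To illustrate the first point: the CG sketch above touches $A$ only through the product `A @ z`, so a sparse matrix can be passed in directly. The example below assumes SciPy is available and uses a made-up diagonally dominant tridiagonal SPD matrix in CSR format:

```python
import numpy as np
import scipy.sparse as sp

# Sparse SPD matrix: diagonally dominant tridiagonal matrix in CSR format
n = 1000
main_diag = 4.0 * np.ones(n)
off_diag = -1.0 * np.ones(n - 1)
A = sp.diags([off_diag, main_diag, off_diag], offsets=[-1, 0, 1], format="csr")
b = np.ones(n)

# The dense and sparse cases share the same code path: only A @ z is needed.
x = conjugate_gradient(A, b, tol=1e-10)
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```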

Notes

  1. Henk A. van der Vorst. Iterative Krylov Methods for Large Linear Systems. Cambridge University Press, 2003. 221 p. ISBN 9780521818285.