
Matrix norm

A matrix norm is a norm on a linear space of matrices, usually related in some way to a corresponding vector norm (a consistent or a subordinate norm).

Contents

  • 1 Definition
  • 2 Operator norms
    • 2.1 Examples of operator norms
  • 3 Non-operator matrix norms
    • 3.1 Example of a non-operator norm
  • 4 Examples of norms
    • 4.1 The $L_{p,q}$ norm
      • 4.1.1 Vector $p$-norm
      • 4.1.2 Frobenius norm
      • 4.1.3 Maximum modulus
    • 4.2 Schatten norm
  • 5 Consistency of matrix and vector norms
  • 6 Equivalence of norms
  • 7 Application
  • 8 See also
  • 9 Notes
  • 10 Literature
  • 11 Links

Definition

Let $K$ be the ground field (usually $K = \mathbb{R}$ or $K = \mathbb{C}$) and let $K^{m\times n}$ be the linear space of all matrices with $m$ rows and $n$ columns with entries from $K$. A norm is defined on the matrix space if to every matrix $A \in K^{m\times n}$ there is associated a non-negative real number $\|A\|$, called its norm, such that

  • $\|A\| > 0$ if $A \neq 0$, and $\|A\| = 0$ if $A = 0$;
  • $\|A + B\| \leq \|A\| + \|B\|$ for all $A, B \in K^{m\times n}$;
  • $\|\alpha A\| = |\alpha|\,\|A\|$ for all $\alpha \in K$ and $A \in K^{m\times n}$ [1].

In the case of square matrices (i.e. $m = n$), matrices can be multiplied without leaving the space, and therefore the norms on these spaces usually also satisfy the submultiplicative property

  • $\|AB\| \leq \|A\|\,\|B\|$ for all matrices $A$ and $B$ in $K^{n\times n}$.

Submultiplicativity can also hold for norms of non-square matrices, provided the norm is defined simultaneously for all the sizes involved: if $A$ is an $\ell \times m$ matrix and $B$ is an $m \times n$ matrix, then $AB$ is an $\ell \times n$ matrix. A minimal sketch of this size compatibility follows.
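The following minimal numpy sketch, assuming the Frobenius norm introduced later in the article as the matrix norm, checks submultiplicativity on rectangular matrices of compatible sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rectangular matrices with compatible shapes: A is l x m, B is m x n, AB is l x n.
l, m, n = 3, 4, 5
A = rng.standard_normal((l, m))
B = rng.standard_normal((m, n))

# Submultiplicativity ||AB|| <= ||A|| ||B|| for the Frobenius norm.
lhs = np.linalg.norm(A @ B, 'fro')
rhs = np.linalg.norm(A, 'fro') * np.linalg.norm(B, 'fro')
print(lhs <= rhs)   # True
```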

Operator norms

An important class of matrix norms are the operator norms, also called subordinate or induced norms. An operator norm is constructed uniquely from two norms given on $K^n$ and $K^m$, using the fact that every $m \times n$ matrix represents a linear operator from $K^n$ to $K^m$. Specifically,

$$\|A\| = \sup\{\|Ax\| : x \in K^n,\ \|x\| = 1\} = \sup\left\{\frac{\|Ax\|}{\|x\|} : x \in K^n,\ x \neq 0\right\}.$$ [2]

If the norms on the vector spaces are chosen consistently, such a norm is submultiplicative (see above).

Examples of operator norms

  • The matrix norm $\|A\|_1 = \max\limits_{1 \leq j \leq n} \sum_{i=1}^{m} |a_{ij}|$, subordinate to the vector norm $\|x\|_1 = \sum_{i=1}^{n} |x_i|$.
  • The matrix norm $\|A\|_\infty = \max\limits_{1 \leq i \leq m} \sum_{j=1}^{n} |a_{ij}|$, subordinate to the vector norm $\|x\|_\infty = \max\limits_{1 \leq i \leq n} |x_i|$.
  • The spectral norm $\|A\|_2 = \sup\limits_{\|x\|_2 = 1} \|Ax\|_2 = \sup\limits_{(x,x)=1} \sqrt{(Ax, Ax)} = \sqrt{\lambda_{\max}(A^{*}A)}$, subordinate to the vector norm $\|x\|_2 = \sqrt{\sum_{i=1}^{n} |x_i|^2}$.
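A short numpy sketch, using a small example matrix of our own choosing, computes these three norms directly from their definitions and compares them with numpy's built-in induced norms:

```python
import numpy as np

A = np.array([[1.0, -2.0, 3.0],
              [4.0,  5.0, -6.0]])

# ||A||_1: maximum absolute column sum (induced by the vector 1-norm).
norm_1 = np.abs(A).sum(axis=0).max()

# ||A||_inf: maximum absolute row sum (induced by the vector max-norm).
norm_inf = np.abs(A).sum(axis=1).max()

# Spectral norm ||A||_2: square root of the largest eigenvalue of A*A,
# i.e. the largest singular value of A.
norm_2 = np.linalg.svd(A, compute_uv=False).max()

# numpy's induced matrix norms give the same values.
assert np.isclose(norm_1, np.linalg.norm(A, 1))
assert np.isclose(norm_inf, np.linalg.norm(A, np.inf))
assert np.isclose(norm_2, np.linalg.norm(A, 2))
```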

Properties of the spectral norm:

  1. The spectral norm of an operator equals the largest singular value of that operator.
  2. The spectral norm of a normal operator equals the largest absolute value of its eigenvalues.
  3. The spectral norm does not change when the matrix is multiplied by an orthogonal (unitary) matrix.

Non-operator matrix norms

There exist matrix norms that are not operator norms. The concept of non-operator matrix norms was introduced by Yu. I. Lyubich [3] and studied by G. R. Belitsky.

Example of a non-operator norm

For example, consider two different operator norms $\|A\|_1$ and $\|A\|_2$, e.g. the row-sum and column-sum norms. Form the new norm $\|A\| = \max(\|A\|_1, \|A\|_2)$. The new norm has the ring property $\|AB\| \leqslant \|A\|\,\|B\|$, preserves the identity ($\|I\| = 1$), and is not an operator norm [4].
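A minimal numpy sketch, assuming the column-sum and row-sum operator norms as the two ingredients, checks the two stated properties on random matrices (that the resulting norm is not an operator norm is a separate argument and is not checked here):

```python
import numpy as np

def mixed_norm(A):
    """max of the column-sum norm ||A||_1 and the row-sum norm ||A||_inf."""
    return max(np.linalg.norm(A, 1), np.linalg.norm(A, np.inf))

rng = np.random.default_rng(1)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))

print(mixed_norm(np.eye(4)))                               # 1.0: the identity is preserved
print(mixed_norm(A @ B) <= mixed_norm(A) * mixed_norm(B))  # True: ring (submultiplicative) property
```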

Examples of norms

The $L_{p,q}$ norm

Let $(a_1, \ldots, a_n)$ be the columns of the matrix $A$. The $L_{2,1}$ norm is by definition the sum of the Euclidean norms of the columns of the matrix:

$$\|A\|_{2,1} = \sum_{j=1}^{n} \|a_j\|_2 = \sum_{j=1}^{n} \left( \sum_{i=1}^{m} |a_{ij}|^2 \right)^{1/2}$$

The $L_{2,1}$ norm generalizes to the $L_{p,q}$ norm for $p, q \geqslant 1$:

$$\|A\|_{p,q} = \left( \sum_{j=1}^{n} \left( \sum_{i=1}^{m} |a_{ij}|^p \right)^{q/p} \right)^{1/q}$$
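A small numpy sketch implements this definition (the helper name lpq_norm is only illustrative) and checks two special cases:

```python
import numpy as np

def lpq_norm(A, p, q):
    """L_{p,q} norm: q-norm of the vector of p-norms of the columns of A."""
    col_norms = np.sum(np.abs(A) ** p, axis=0) ** (1.0 / p)
    return np.sum(col_norms ** q) ** (1.0 / q)

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

print(lpq_norm(A, 2, 1))                      # L_{2,1}: sum of Euclidean column norms
print(np.isclose(lpq_norm(A, 2, 2),           # L_{2,2} coincides with the Frobenius norm
                 np.linalg.norm(A, 'fro')))
```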

Vector $p$-norm

An $m \times n$ matrix can be regarded as a vector of length $mn$, and standard vector norms can then be applied to it. For example, the $L_{p,q}$ norm with $p = q$ yields the vector $p$-norm:

$$\|A\|_p = \|\mathrm{vec}(A)\|_p = \left( \sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^p \right)^{1/p}$$

This norm differs from the induced $p$-norm $\|A\|_p = \sup\limits_{x \neq 0} \frac{\|Ax\|_p}{\|x\|_p}$ and from the Schatten $p$-norm (see below), although the same notation is used.
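For $p = 2$ the difference is easy to see numerically; the sketch below, with an arbitrary example matrix, compares the entrywise 2-norm (which equals the Frobenius norm) with the induced 2-norm:

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

# Entrywise (vector) 2-norm: treat A as a vector of length m*n; equals the Frobenius norm.
entrywise = np.linalg.norm(A.ravel(), 2)

# Induced 2-norm (spectral norm): largest singular value; generally smaller.
induced = np.linalg.norm(A, 2)

print(entrywise, induced)   # different values despite the shared notation ||A||_p
```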

Frobenius norm

The Frobenius norm, or Euclidean norm, is the special case of the $p$-norm for $p = 2$: $\|A\|_F = \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij}^2}$.

The Frobenius norm is easy to compute (compared, for example, with the spectral norm). It has the following properties:

  • Consistency: $\|Ax\|_2 \leq \|A\|_F \|x\|_2$, since by the Cauchy–Bunyakovsky inequality
$$\|Ax\|_2^2 = \sum_{i=1}^{m} \left| \sum_{j=1}^{n} a_{ij} x_j \right|^2 \leq \sum_{i=1}^{m} \left( \sum_{j=1}^{n} |a_{ij}|^2 \sum_{j=1}^{n} |x_j|^2 \right) = \sum_{j=1}^{n} |x_j|^2 \, \|A\|_F^2 = \|A\|_F^2 \|x\|_2^2.$$
  • Submultiplicativity: $\|AB\|_F \leq \|A\|_F \|B\|_F$, since
$$\|AB\|_F^2 = \sum_{i,j} \left| \sum_{k} a_{ik} b_{kj} \right|^2 \leq \sum_{i,j} \left( \sum_{k} |a_{ik}| |b_{kj}| \right)^2 \leq \sum_{i,j} \left( \sum_{k} |a_{ik}|^2 \sum_{k} |b_{kj}|^2 \right) = \sum_{i,k} |a_{ik}|^2 \sum_{k,j} |b_{kj}|^2 = \|A\|_F^2 \|B\|_F^2.$$
  • $\|A\|_F^2 = \operatorname{tr}(A^{*}A) = \operatorname{tr}(AA^{*})$, where $\operatorname{tr} A$ is the trace of the matrix $A$ and $A^{*}$ is the Hermitian conjugate of $A$.
  • $\|A\|_F^2 = \sigma_1^2 + \sigma_2^2 + \dots + \sigma_n^2$, where $\sigma_1, \sigma_2, \dots, \sigma_n$ are the singular values of the matrix $A$.
  • $\|A\|_F$ does not change when the matrix $A$ is multiplied on the left or on the right by orthogonal (unitary) matrices [5].
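The trace, singular-value, and invariance properties above are cheap to verify numerically; a sketch with a random real matrix (an illustration, not taken from the original text):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))

fro = np.linalg.norm(A, 'fro')

# ||A||_F^2 = tr(A* A) = sum of squared singular values.
assert np.isclose(fro ** 2, np.trace(A.T @ A))
assert np.isclose(fro ** 2, np.sum(np.linalg.svd(A, compute_uv=False) ** 2))

# Invariance under multiplication by an orthogonal matrix (here taken from a QR factorization).
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
assert np.isclose(np.linalg.norm(Q @ A, 'fro'), fro)
```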

Maximum modulus

The maximum modulus norm is another special case of the $p$-norm, for $p = \infty$:

$$\|A\|_{\max} = \max_{i,j} |a_{ij}|.$$
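In numpy this is simply the largest absolute entry (a trivial illustration with a made-up matrix):

```python
import numpy as np

A = np.array([[1.0, -7.0],
              [3.0,  2.0]])

# Maximum-modulus norm: largest absolute value among the entries.
print(np.max(np.abs(A)))   # 7.0
```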

Schatten norm

The Schatten $p$-norm is the vector $p$-norm applied to the singular values of a matrix: $\|A\|_p = \left( \sum_i \sigma_i^p \right)^{1/p}$. For $p = 2$ it coincides with the Frobenius norm, and for $p = \infty$ with the spectral norm.

Consistency of matrix and vector norms

A matrix norm $\|\cdot\|_{ab}$ on $K^{m\times n}$ is called consistent with the norms $\|\cdot\|_a$ on $K^n$ and $\|\cdot\|_b$ on $K^m$ if

$$\|Ax\|_b \leq \|A\|_{ab} \|x\|_a$$

for all $A \in K^{m\times n}$ and $x \in K^n$. An operator norm is, by construction, consistent with the original vector norm.

Examples of matrix norms that are consistent but not subordinate:

  • The Euclidean norm $\|A\|_F = \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij}^2}$ is consistent with the vector norm $\|x\|_2 = \sqrt{\sum_{i=1}^{n} x_i^2}$ [5].
  • The norm $\|A\| = \sum_{i,j=1}^{n} |a_{ij}|$ is consistent with the vector norm $\|x\|_1 = \sum_{i=1}^{n} |x_i|$ [6].
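A quick numpy check of the first example, with random data, as an illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 5))
x = rng.standard_normal(5)

# Consistency of the Frobenius (Euclidean) matrix norm with the vector 2-norm.
print(np.linalg.norm(A @ x) <= np.linalg.norm(A, 'fro') * np.linalg.norm(x))   # True
```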

Equivalence of norms

All norms on the space $K^{m\times n}$ are equivalent, that is, for any two norms $\|\cdot\|_\alpha$ and $\|\cdot\|_\beta$ and any matrix $A \in K^{m\times n}$ the two-sided inequality holds:

$$C_1 \|A\|_\alpha \leq \|A\|_\beta \leq C_2 \|A\|_\alpha,$$

where the constants $C_1$ and $C_2$ do not depend on the matrix $A$.

For $A \in \mathbb{R}^{m\times n}$ the following inequalities hold:

  • $\|A\|_2 \leq \|A\|_F \leq \sqrt{n}\,\|A\|_2$,
  • $\|A\|_{\max} \leq \|A\|_2 \leq \sqrt{mn}\,\|A\|_{\max}$,
  • $\frac{1}{\sqrt{n}}\,\|A\|_\infty \leq \|A\|_2 \leq \sqrt{m}\,\|A\|_\infty$,
  • $\frac{1}{\sqrt{m}}\,\|A\|_1 \leq \|A\|_2 \leq \sqrt{n}\,\|A\|_1$,

where $\|A\|_1$, $\|A\|_2$, and $\|A\|_\infty$ are the operator norms [7].
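Two of these bounds can be verified directly in numpy on a random real matrix (an illustration under the stated shapes):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 5, 3
A = rng.standard_normal((m, n))

spec = np.linalg.norm(A, 2)        # spectral (operator) norm
fro  = np.linalg.norm(A, 'fro')    # Frobenius norm
mx   = np.max(np.abs(A))           # maximum-modulus norm

print(spec <= fro <= np.sqrt(n) * spec)    # ||A||_2 <= ||A||_F <= sqrt(n) ||A||_2
print(mx <= spec <= np.sqrt(m * n) * mx)   # ||A||_max <= ||A||_2 <= sqrt(mn) ||A||_max
```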

Application

Matrix norms are often used in the analysis of computational methods of linear algebra. For example, a program for solving a system of linear algebraic equations may return an inaccurate result if the coefficient matrix is ill-conditioned ("almost singular"). To quantify how close a matrix is to being singular, one must be able to measure distances in the space of matrices, and this is exactly what matrix norms provide [8].
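One common quantitative measure built from matrix norms is the condition number $\|A\|\,\|A^{-1}\|$; a minimal numpy illustration with a made-up nearly singular matrix:

```python
import numpy as np

# An ill-conditioned ("almost singular") coefficient matrix.
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-10]])

# Condition number in the spectral norm: ||A||_2 * ||A^{-1}||_2.
print(np.linalg.cond(A, 2))   # huge value => solutions of A x = b are very sensitive to perturbations
```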

See also

  • Norm (mathematics)
  • Normed space

Notes

  1. Gantmakher, 1988, p. 410.
  2. Prasolov, 1996, p. 210.
  3. Lyubich Yu. I. On operator norms of matrices // Uspekhi Mat. Nauk. — 1963. — Vol. 18, issue 4 (112). — pp. 161–164. — URL: http://mi.mathnet.ru/rus/umn/v18/i4/p161
  4. Belitsky, 1984, p. 99.
  5. Ilyin, Kim, 1998, p. 311.
  6. Bellman, 1969, p. 196.
  7. Golub, Van Loan, 1999, p. 63.
  8. Golub, Van Loan, 1999, p. 61.

Literature

  • Ilyin V. A., Kim G. D. Linear Algebra and Analytic Geometry. — M.: Moscow University Press, 1998. — 320 p. — ISBN 5-211-03814-2.
  • Gantmakher F. R. Matrix Theory. — M.: Nauka, 1988.
  • Bellman R. Introduction to Matrix Theory. — M.: Nauka, 1969.
  • Prasolov V. V. Problems and Theorems of Linear Algebra. — M.: Nauka, 1996. — 304 p. — ISBN 5-02-014727-3.
  • Golub G., Van Loan C. Matrix Computations: translated from English. — M.: Mir, 1999. — 548 p. — ISBN 5-03-002406-9.
  • Belitsky G. R., Lyubich Yu. I. Norms of Matrices and Their Applications. — Kiev: Naukova Dumka, 1984. — 160 p.

Links

  • Exponenta.ru. Educational mathematical portal. Retrieved November 15, 2016.
  • World of Mathematics. Matrix norm. Retrieved December 3, 2016.

