The Peano series is an infinite sum whose terms are obtained by repeated application of the operators of integration and matrix multiplication.
The Peano series was proposed in 1888 by Giuseppe Peano [1] as a means of determining the matricant of a system of ordinary differential equations in normal form [2]. The general theory and properties of matricants for systems of equations in normal form were developed by F. R. Gantmacher [3].
In recent years, algorithms based on the Peano series have been widely used to solve applied problems [4]. With the development of computing technology, it has become possible to implement such algorithms not only in analytical form but also in numerical and mixed numerical-analytical form.
Definition
A system of linear differential equations with variable coefficients in normal form is

Y' = AY + F,

where Y is the vector of unknown functions, A is the coefficient matrix, and F is the vector of given functions (the load vector):

Y = \{y_i(t)\}^T; \quad A = [a_{ij}(t)]; \quad F = \{f_i(t)\}^T; \quad i = 1, 2, \ldots, n.
The general solution of a system of differential equations in normal form is expressed in terms of the matrix of fundamental solutions (the matricant)

\Omega(t) = [\omega_{ij}(t)]

as

Y = \Omega C + Y_F, \quad Y_F = \Omega \int \Omega^{-1} F.
G. Peano showed that the matricant of the matrix A can be represented as an operator series:

\Omega = [\omega_{ij}] = E + \int A + \int A \int A + \int A \int A \int A + \ldots,

where E is the identity matrix. The matrix A must be a bounded, integrable matrix function on the interval of variation of the argument under consideration. The series converges absolutely and uniformly on any closed interval on which the matrix A is continuous.
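As an illustration of the operator series (a minimal numerical sketch, not from the source; function and parameter names are assumed), consider the scalar case n = 1, where the series 1 + \int a + \int a \int a + \ldots sums to \exp\left(\int_0^t a\,dt\right). The iterated integrals are accumulated with the cumulative trapezoidal rule on a uniform grid:

```python
import math

def peano_scalar(a, t_end, steps=2000, terms=25):
    """Scalar Peano series 1 + int(a) + int(a int(a)) + ... on [0, t_end],
    with iterated integrals by the cumulative trapezoidal rule."""
    h = t_end / steps
    ts = [i * h for i in range(steps + 1)]
    av = [a(t) for t in ts]
    term = [1.0] * (steps + 1)      # current series term on the grid
    total = term[:]                 # partial sum, starts at 1
    for _ in range(terms):
        g = [av[i] * term[i] for i in range(steps + 1)]
        new = [0.0] * (steps + 1)
        for i in range(1, steps + 1):
            new[i] = new[i - 1] + 0.5 * h * (g[i - 1] + g[i])
        term = new
        total = [total[i] + term[i] for i in range(steps + 1)]
    return total[-1]

# For a(t) = 2t we have int_0^1 2t dt = 1, so the sum should approach exp(1)
approx_e = peano_scalar(lambda t: 2 * t, 1.0)
```

For the scalar case the series is just the exponential of the integral; the same iterated-integral scheme generalizes to matrices, where the factors no longer commute.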
The integration operator is an integral with a variable upper limit:

\int(\ldots) = \int_{t_0}^{t} (\ldots)\, dt; \quad \int^2(\ldots) = \int \int (\ldots) = \int_{t_0}^{t} \left( \int_{t_0}^{t_\xi} (\ldots)\, dt_\xi \right) dt.
From these expressions it follows that
\Omega(t_0) = [\omega_{ij}(t_0)] = E,

\omega_{ii}(t_0) = 1; \quad \omega_{ij}(t_0) = 0, \ i \neq j; \quad C = Y_0 = \{y_{0,i}\}^T; \quad y_{0,i} = y_i(t_0).
Another, physically more convenient, form of representing a general solution is also possible:
Y = \Omega \cdot (Y_0 + U_P); \quad U_P = \int \Omega^{-1} F.

Here Y_0 is the vector of initial values, specified at t = t_0, and U_P is the vector of external actions applied for t \geq t_0. Without loss of generality, we may assume t_0 = 0.

Thus, if the variable physically represents time, the general solution solves the Cauchy problem, and if the variable physically represents distance, it solves the boundary value problem in the form of the method of initial parameters [1].
Region of convergence of the Peano series
The Peano series converges absolutely and uniformly on a given interval of variation of t if the majorant series converges:

M = 1 + \mu(t) + \frac{n \mu(t)^2}{2!} + \frac{n^2 \mu(t)^3}{3!} + \ldots + \frac{n^{k-1} \mu(t)^k}{k!} + \ldots,

\mu(t) = \max_{i,j} \int_0^t \left| a_{ij}(t) \right| dt.
Consequently, the convergence of the series is determined by the largest value, over all i and j, of the integral of the absolute value of the functions a_{ij} on the given interval of variation of t.
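Because the majorant series is elementary, one can estimate in advance how many terms of the Peano series are needed for a given accuracy. A small sketch under these assumptions (function and parameter names are illustrative, not from the source) sums the terms n^{k-1}\mu^k/k! until they fall below a tolerance; note the closed form M = 1 + (e^{n\mu} - 1)/n, which is useful as a check:

```python
import math

def peano_truncation_order(n, mu, tol=1e-12):
    """Smallest K whose majorant term n^{K-1} mu^K / K! drops below tol,
    together with the accumulated majorant partial sum."""
    term = mu                 # k = 1 term: n^0 * mu / 1!
    M, k = 1.0 + term, 1
    while term > tol:
        k += 1
        term *= n * mu / k    # recurrence: term_k = term_{k-1} * (n mu / k)
        M += term
    return k, M

k, M = peano_truncation_order(3, 0.5)
M_exact = 1.0 + (math.exp(3 * 0.5) - 1.0) / 3   # closed form of the full sum
```

The returned K is a conservative truncation order for the Peano series itself, since the majorant terms dominate the matrix terms entrywise.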
Applying the Peano series to solving linear differential equations
A linear differential equation with variable coefficients

y^{(n)} + a_{n-1} y^{(n-1)} + a_{n-2} y^{(n-2)} + \ldots + a_1 y' + a_0 y = f(t)

can be reduced to an equivalent system of equations in normal form by introducing the notation

y_i = y^{(i-1)}, \quad i = 1, 2, \ldots, n.
Differentiating this equality gives

y'_i = y^{(i)} = y_{i+1}.

These equalities can be taken as the equations of the normal-form system for i = 1, 2, \ldots, n-1. The last equation is obtained from the original equation by moving all terms except y^{(n)} to the right-hand side, writing them in reverse order, and expressing the derivatives through the variables with the corresponding indices:

y^{(n)} = -a_0 y - a_1 y' - \ldots - a_{n-2} y^{(n-2)} - a_{n-1} y^{(n-1)} + f(t);
y'_n = -a_0 y_1 - a_1 y_2 - \ldots - a_{n-2} y_{n-1} - a_{n-1} y_n + f(t).
Then we obtain an equivalent system in normal form:

Y' = AY + F.

The matrix A and the vector F of this system have the form:

A = \begin{bmatrix} 0 & 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & & & & \ddots & & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 0 & 1 \\ -a_0 & -a_1 & -a_2 & -a_3 & \cdots & -a_{n-2} & -a_{n-1} \end{bmatrix}; \quad F = \{0, \ldots, 0, f(t)\}^T.
In the vector Y, each element is the derivative of the previous one. Therefore each row of \Omega, starting from the second, is the derivative of the preceding row:

\omega_{ij} = \omega'_{i-1,j}.
If we denote \omega_{1j} = \psi_j, the matricant can be represented as:

\Omega = W = \begin{bmatrix} \psi_1 & \psi_2 & \cdots & \psi_n \\ \psi'_1 & \psi'_2 & \cdots & \psi'_n \\ \vdots & & & \vdots \\ \psi_1^{(n-1)} & \psi_2^{(n-1)} & \cdots & \psi_n^{(n-1)} \end{bmatrix}.
Thus, the matricant of the equivalent normal-form system is a Wronskian matrix [1], and its system of fundamental solutions is normalized at zero.
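The reduction above is mechanical and easy to automate. The sketch below (function name is illustrative, not from the source) builds the companion matrix A and the load vector F for constant coefficients a_0, \ldots, a_{n-1} and load f; for variable coefficients the same construction would simply be evaluated at each value of t:

```python
import numpy as np

def companion_system(a, f):
    """A and F of the normal-form system Y' = AY + F equivalent to
    y^(n) + a_{n-1} y^(n-1) + ... + a_1 y' + a_0 y = f."""
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)       # ones on the superdiagonal: y_i' = y_{i+1}
    A[-1, :] = [-c for c in a]       # last row: -a_0, -a_1, ..., -a_{n-1}
    F = np.zeros(n)
    F[-1] = f                        # the load enters only the last equation
    return A, F

A, F = companion_system([2.0, 3.0, 4.0], 5.0)
```

The last row reproduces the reversed-order sign pattern of the derivation above.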
Peano series in solving second-order differential equations
Consider an equation with arbitrary variable coefficients:
y'' + a_1(t)\, y' + a_0(t)\, y = p(t).
This equation reduces to a system of normal form:
Y = \{y, y'\}^T; \quad A = \begin{bmatrix} 0 & 1 \\ -a_0 & -a_1 \end{bmatrix}; \quad F = \{0, p(t)\}^T.
If a_1 = 0, the elements of the matricant can be represented as (with t as the integration variable, as above):

\omega_{11} = 1 - \int^2 a_0 + \int^2 a_0 \int^2 a_0 - \int^2 a_0 \int^2 a_0 \int^2 a_0 + \ldots
\omega_{12} = t - \int^2 a_0 t + \int^2 a_0 \int^2 a_0 t - \int^2 a_0 \int^2 a_0 \int^2 a_0 t + \ldots
\omega_{21} = -\int a_0 + \int a_0 \int^2 a_0 - \int a_0 \int^2 a_0 \int^2 a_0 + \ldots
\omega_{22} = 1 - \int a_0 t + \int a_0 \int^2 a_0 t - \int a_0 \int^2 a_0 \int^2 a_0 t + \ldots
If the integrals can be evaluated in closed form, the solution is representable as a series in known functions. As an example of the use of these formulas, consider the equation of oscillations
y'' + \omega^2 y = 0, \quad a_0 = \omega^2; \quad a_1 = 0.

The elements of the matricant are obtained in the form of the following series:

\omega_{11} = 1 - \frac{\omega^2 t^2}{2!} + \frac{\omega^4 t^4}{4!} - \ldots = \cos \omega t;
\omega_{12} = t - \frac{\omega^2 t^3}{3!} + \frac{\omega^4 t^5}{5!} - \ldots = \omega^{-1} \sin \omega t.
The elements of the second row of the matricant are obtained by differentiating the first row:

\Omega = \begin{bmatrix} \cos \omega t & \omega^{-1} \sin \omega t \\ -\omega \sin \omega t & \cos \omega t \end{bmatrix}.
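With this closed-form matricant, the general solution Y(t) = \Omega(t) Y_0 propagates any initial state of the oscillator. A small sketch (function names are illustrative, not from the source):

```python
import math

def oscillator_matricant(w, t):
    """Closed-form matricant of y'' + w^2 y = 0 (the 2x2 matrix above)."""
    return [[math.cos(w * t), math.sin(w * t) / w],
            [-w * math.sin(w * t), math.cos(w * t)]]

def propagate(w, t, y0, v0):
    """Y(t) = Omega(t) * Y_0 for the initial state Y_0 = {y0, v0}^T."""
    O = oscillator_matricant(w, t)
    return (O[0][0] * y0 + O[0][1] * v0,
            O[1][0] * y0 + O[1][1] * v0)

# starting from y(0) = 1, y'(0) = 0, the solution is y = cos(w t)
y, v = propagate(2.0, 0.5, 1.0, 0.0)
```

Note that \Omega(0) = E and \det \Omega = 1, consistent with the initial-value normalization described above.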
Of great practical interest is the solution of the Sturm-Liouville problem [1] for equations of the form:
y'' + \lambda\, \bar{a}_0 y = 0; \quad a_0 = \lambda\, \bar{a}_0.
In this case, the terms of the series are multiplied by the corresponding powers of the parameter \lambda. For example:
\omega_{12} = t - \lambda \int^2 \bar{a}_0 t + \lambda^2 \int^2 \bar{a}_0 \int^2 \bar{a}_0 t - \lambda^3 \int^2 \bar{a}_0 \int^2 \bar{a}_0 \int^2 \bar{a}_0 t + \ldots

\omega_{21} = -\lambda \int \bar{a}_0 + \lambda^2 \int \bar{a}_0 \int^2 \bar{a}_0 - \lambda^3 \int \bar{a}_0 \int^2 \bar{a}_0 \int^2 \bar{a}_0 + \lambda^4 \int \bar{a}_0 \int^2 \bar{a}_0 \int^2 \bar{a}_0 \int^2 \bar{a}_0 - \ldots
Taking into account the boundary conditions at the ends of the interval of the argument, these formulas make it possible to form a polynomial in \lambda whose roots give the entire spectrum of eigenvalues [4].
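For the simplest case \bar{a}_0 = 1 with boundary conditions y(0) = y(1) = 0 (an assumed example, not worked in the source), the eigenvalue condition is \omega_{12}(1; \lambda) = 0, and the series reduces to \sum_k (-1)^k \lambda^k / (2k+1)!. Truncating it gives a polynomial whose roots approximate the known spectrum \lambda_k = k^2 \pi^2:

```python
import math
import numpy as np

# Truncated series omega_12(t=1; lambda) = sum_{k=0}^{K} (-1)^k lambda^k / (2k+1)!
K = 10  # truncation order
coeffs = [(-1) ** k / math.factorial(2 * k + 1) for k in range(K, -1, -1)]

roots = np.roots(coeffs)  # roots of the truncated characteristic polynomial
eigs = sorted(r.real for r in roots if abs(r.imag) < 1e-6 and r.real > 0)
# eigs[0] approximates pi^2, eigs[1] approximates 4*pi^2, and so on;
# higher eigenvalues need a higher truncation order K for the same accuracy
```

This illustrates the statement above: the boundary conditions select one matricant entry, and its truncated \lambda-series plays the role of a characteristic polynomial.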
Numerical implementation of the algorithm
In cases where the integrals cannot be evaluated in closed form, or where the resulting expressions are too complex and cumbersome, a numerical solution algorithm is possible. The interval of variation of the argument is divided by a set of nodes into sufficiently small equal subintervals. All functions involved in the solution are specified by their values at the grid nodes, so each function is represented by a vector of nodal values. All integrals are evaluated numerically, for example by the trapezoidal rule.
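The grid algorithm just described can be sketched as follows (a minimal illustration with assumed names; the source prescribes no particular implementation). Each Peano term is obtained from the previous one by multiplying by A(t) at the nodes and accumulating a cumulative trapezoidal integral, and terms are summed up to a chosen truncation order:

```python
import numpy as np

def matricant_peano(A_func, t_grid, terms=20):
    """Matricant of Y' = A(t) Y as a truncated Peano series on a grid;
    iterated integrals use the cumulative trapezoidal rule."""
    t_grid = np.asarray(t_grid, dtype=float)
    m = len(t_grid)
    A = np.array([A_func(t) for t in t_grid])            # shape (m, n, n)
    n = A.shape[1]
    term = np.broadcast_to(np.eye(n), (m, n, n)).copy()  # current series term
    Omega = term.copy()                                  # partial sum, starts at E
    dt = np.diff(t_grid)[:, None, None]
    for _ in range(terms):
        g = A @ term                                     # integrand A * (previous term), batched
        incr = 0.5 * (g[1:] + g[:-1]) * dt               # trapezoid panel areas
        term = np.concatenate([np.zeros((1, n, n)), np.cumsum(incr, axis=0)])
        Omega += term
    return Omega

# check against the oscillator y'' + y = 0: the matricant is a rotation-like matrix
ts = np.linspace(0.0, 1.0, 2001)
Om = matricant_peano(lambda t: np.array([[0.0, 1.0], [-1.0, 0.0]]), ts)
```

The accuracy is limited by the grid step (trapezoidal rule, O(h^2)) and the truncation order; both are easy to tighten independently.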
Solution of applied problems
Algorithms based on the Peano series are used to solve problems of statics, dynamics, and stability for rods, plates, and shells with variable parameters. For two-dimensional systems, dimension-reduction methods are applied. When shells of revolution are analyzed, the shell parameters and the load are expanded in trigonometric series in the circumferential direction. For each harmonic, a system of equations in normal form is set up that describes the variation of the shell properties, forces, and strains in the longitudinal direction, and a general solution of the boundary value problem is obtained; this part of the problem is usually solved numerically. The harmonics are then combined using compatibility conditions, giving the stress-strain state of the shell as it varies in the longitudinal and circumferential directions.