
Equation solution


In mathematics, solving an equation is the task of finding the values of its arguments (numbers, functions, sets, etc.) for which the equality holds (the expressions to the left and right of the equal sign become equal). The values of the unknown variables at which this equality is achieved are called the solutions or roots of the equation. To solve an equation means to find the set of all its solutions (roots) or to prove that there are none (or none that satisfy the given conditions).

For example, the equation $x + y = 2x - 1$ is solved for the unknown $x$ by the substitution $x = y + 1$, since replacing the variable $x$ with the expression $y + 1$ turns the equation into an identity: $(y + 1) + y = 2(y + 1) - 1$. Likewise, if $y$ is taken as the unknown variable, the equation is solved by the substitution $y = x - 1$: replacing $y$ with the expression $x - 1$ turns the equation into the identity $x + (x - 1) = 2x - 1$. Both $x$ and $y$ can also be treated as unknown variables simultaneously. In that case the equation has many solutions, for example $(x, y) = (1, 0)$, i.e. $x = 1$ and $y = 0$, and in general $(x, y) = (a + 1,\, a)$ for all possible values of $a$.

Depending on the task, it may be necessary to find one solution (any suitable one) or all solutions of the equation. The set of all solutions of the equation is called its solution set. Besides simply finding a solution, the problem may be posed of finding the best solution of the equation with respect to some parameter; tasks of this kind are called optimization problems. Solutions of optimization problems are, as a rule, not called "solutions of an equation".

Content

  • 1 Analytical methods for solving an equation
    • 1.1 Value selection method
    • 1.2 Full search
    • 1.3 Inverse operation method
    • 1.4 Graphical method
    • 1.5 ODZ estimation method
    • 1.6 Factorization method
    • 1.7 Transformation methods
      • 1.7.1 Transfer of terms
      • 1.7.2 Adding (subtracting) constants (expressions)
      • 1.7.3 Multiplication (division) by a nonzero constant (expression)
      • 1.7.4 Replacing expressions
      • 1.7.5 Exponentiation
      • 1.7.6 Taking logarithms
      • 1.7.7 Potentiation
      • 1.7.8 Tetration of height 2
      • 1.7.9 Superpotentiation
    • 1.8 Special solution methods
      • 1.8.1 Transformations of trigonometric equations
      • 1.8.2 Transformations of differential and integral equations
      • 1.8.3 Transformations of functional equations
  • 2 Numerical methods for solving equations
    • 2.1 Bisection method (dichotomy)
    • 2.2 Method of chords (secants)
    • 2.3 Newton's method
    • 2.4 Simple iteration method
  • 3 Solution verification methods
  • 4 Methods of screening out extraneous roots
  • 5 Criteria for the existence of admissible solutions of equations
  • 6 Notes
  • 7 Literature

Analytical Methods for Solving an Equation

By a method for solving a problem (including equations) is meant, first of all, a step-by-step algorithm.

The analytical solution method (otherwise, simply an analytical solution) is a closed-form expression that can be evaluated in a finite number of operations [1]. However, there are formulas (expressions) containing functions that are non-computable (or not representable) at the current stage of development of theory and technology. In what follows, by an analytical solution we mean any solution written in formula form that contains known or definite functions of the parameters (for numerical equations) or of the variables (for functional equations). Below are the main analytical methods for solving various types of equations.

Value Selection Method

The simplest non-logical method of solving an equation (so called because it requires no appeal to the laws of mathematical logic), which consists in guessing a correct root value. With this method, students begin learning to solve equations more complex than linear ones (e.g., quadratic and cubic) in grades 5–7 of secondary school in Russia.

An example of solving an equation by the selection method: $x^2 - 2x + 1 = 0$.

It is easy to guess that one of the roots of the equation will be $1$. To check the correctness of the selected value, it must be substituted into the original equation in place of the variable $x$: $1^2 - 2 \cdot 1 + 1 = 0 \Longleftrightarrow 0 = 0$.

As you can see, the required identity holds, which means the value we found is correct (that is, it belongs to the solution set of the equation).
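The substitution check above can be sketched in a few lines of Python (the helper name `is_root` is illustrative, not from the source):

```python
# A minimal substitution check for the guessed root x = 1 of x**2 - 2*x + 1 = 0.
def is_root(f, x):
    """Return True if x turns f(x) = 0 into an identity."""
    return f(x) == 0

f = lambda x: x**2 - 2*x + 1
print(is_root(f, 1))  # True
```

The same helper rejects wrong guesses, e.g. `is_root(f, 2)` is `False`.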

The disadvantages of the selection method:

  • Most often, the roots of an equation are irrational (algebraically irrational or even transcendental) numbers, which are almost impossible to guess;
  • The selection method cannot establish the absence of a solution, under any restrictions on the values of the solutions;
  • In the case of an infinite number of solutions (for example, in equations with two or more variables), this method is completely unsuitable on its own; however, it can still be useful: starting from a correctly selected value, the remaining admissible solutions can sometimes be obtained by some other known method [2];
  • Not all equations are expressed through simple functions of a variable, so the selection method is also incapable of solving such equations;
  • The applicability of this method is limited not only by the complexity of the equations, their type and the range of feasible solutions, but also by the solver's computational skill and knowledge of the most common values and the types of equations they correspond to [3].

Advantages of the selection method:

  • Ease of use (the selection method requires virtually no logical steps apart from verification);
  • Speed of obtaining a solution (usually, where the context indicates that the selection method should be applied, solutions are found quite quickly);
  • Availability (sometimes the analytical solution of an equation is absent altogether, yet a value is still easy to select in a particular case; for example, the equation $2^x - x^2 - 1 = 0$ still cannot be solved analytically [4], but obtaining at least one root by selection is quite simple: $x = 1 \longrightarrow 2^1 - 1^2 - 1 = 0 \Longleftrightarrow 0 = 0$).

Full search

A special case of the selection method is the exhaustive search method - that is, finding a solution by exhausting all possible options. It is used if the set of all solutions (or all solutions satisfying certain conditions) is finite.
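When the candidate set is finite, the exhaustive search is trivial to mechanize; a minimal sketch in Python (the cubic and the candidate range are chosen for illustration):

```python
# Exhaustive search: collect every root of x**3 - 6*x**2 + 11*x - 6 = 0
# among the integers -10..10 (a finite candidate set).
f = lambda x: x**3 - 6*x**2 + 11*x - 6
roots = [x for x in range(-10, 11) if f(x) == 0]
print(roots)  # [1, 2, 3]
```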

Inverse Operation Method

This method of solving equations, otherwise called the method of constructing the inverse function, is based on the property of the inverse function to undo the action of the function on the value of the variable [5]:

$f^{-1}(f(x)) = x$ or, which is essentially the same, $f(f^{-1}(x)) = x$.

The method is usually used as part of other solution methods and is applied on its own only when the variables and the constants stand on opposite sides of the equal sign: $f(x) = g(a_0, a_1, \ldots, a_i)$, $a_i = \mathrm{const}$.

The simplest example is a linear equation: $5x = 10$. Here $f(x) = 5x$, $g(a_i) = 10$, so $f^{-1}(x) = \frac{x}{5}$, and we get $f^{-1}(f(x)) = \frac{5x}{5} = x$; now the same must be done with the other side of the equation: $f^{-1}(g(a_i)) = \frac{10}{5} = 2$, hence $x = 2$. Verification: $5 \cdot 2 = 10 \Longleftrightarrow 10 = 10$.

Another example: $x^2 = 4 \Longleftrightarrow x = \pm\sqrt{4} \Longleftrightarrow x_{1,2} = 2; -2$.
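The linear example above can be sketched in Python as applying an explicit inverse to both sides (the function names `f` and `f_inv` are illustrative):

```python
# Solving f(x) = c by applying the inverse function to both sides.
# Here f(x) = 5*x, so its inverse is y / 5.
def f(x):
    return 5 * x

def f_inv(y):
    return y / 5

x = f_inv(10)
print(x)           # 2.0
print(f(x) == 10)  # True
```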

Disadvantages of the reverse operation method:

  • Sometimes the inverse function of a variable, used as part of other solution methods, leads to several results, because of which extraneous roots appear in the solution; these are obtained in a logically valid way yet do not fit the equation (they violate the identity) [6][7], which is revealed only during verification;
  • Inverse operations most often seem much more complicated than the direct ones (for example, primary-school children perceive division as a harder operation than the multiplication "familiar" to them; high-school students often take a long time to get used to integration, since differentiation is far more familiar to them and is perceived as easier);
  • Such cases are rare, but it also happens that an operation is inverse to itself (for example, the linear function $f(x) = x$, or the derivative and antiderivative of the exponential function $f(x) = e^x$ [8]);
  • Not all inverse functions are representable as compositions of other known functions (most often these are integrals: the Fresnel integrals, the Laplace function, the sine and cosine integrals, the exponential integral; or, for example, non-elementary functions such as the Lambert W function, tetration and the superroot);
  • Not every inverse operation yields an admissible, or even any, solution (for example, the function $f(x) = x^2$ gives a real number for any value of the variable, but this value is always non-negative [9], which is why the inverse function is restricted to non-negative arguments; there are also non-integrable [10] or non-differentiable [11] functions, like the Dirichlet function, the Weierstrass function, etc.);
  • For some inverse operations there is still no calculation algorithm, so the values of these functions, as solutions of certain equations, remain in formula form (for example, the superlogarithm, the Riemann ζ-function, etc.).

Advantages of the inverse method:

  • In contrast to the selection method, the use of inverse functions, most often, allows not to miss the additional existing feasible solutions, even if their set is infinite;
  • Inverse operations are among the main components of almost any logical method for solving equations; they are used much more often and, by their example, help to better understand the concepts of the domain of admissible values (ODZ), the domain of definition, and the range of the argument(s);
  • In most cases, the values ​​of the inverse functions can be calculated using various kinds of calculators or, conversely, left in the formula expression for convenience in future use.

Graphical Method

This method of solving problems (including equations) is based on the basic property of function graphs: a definite and (ideally) exact mapping between the values of the arguments and the values of the functions of those arguments in the coordinate space, so that for each specific function every point of the graph carries at most one such set of values (that is, the same coordinate point cannot be assigned two different values for the same argument).

By definition, two function graphs have a common point (an intersection point of the graphs) when the values of the functions at the same value(s) of the argument(s) are equal: $f(x_1, x_2, \ldots, x_i) = g(x_1, x_2, \ldots, x_i)$.

For example, let us solve graphically the equation $\frac{1}{2}x^2 - 4x + 10 = x - 2$ (see the picture below):

 
Example of intersection points (A and B).

The graph of the function $f(x) = \frac{1}{2}x^2 - 4x + 10$ is shown here in black, and the graph of $g(x) = x - 2$ in blue. The abscissas of points A and B form the solution set of the original equation: $x_1 = 4$, $x_2 = 6$; they are easily read off by projecting the points onto the abscissa ($x$) axis. Verification: $\frac{1}{2} \cdot 4^2 - 4 \cdot 4 + 10 = 4 - 2 \Longleftrightarrow 2 = 2$ and $\frac{1}{2} \cdot 6^2 - 4 \cdot 6 + 10 = 6 - 2 \Longleftrightarrow 4 = 4$. The solution set is complete, because a line cannot cross a parabola more than twice (this follows from the fundamental theorem of algebra).
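The intersection abscissas can be checked numerically: the intersections of $f$ and $g$ are exactly the roots of $f(x) - g(x) = 0$. A minimal sketch using the quadratic formula:

```python
import math

# Intersections of f(x) = 0.5*x**2 - 4*x + 10 and g(x) = x - 2 are the
# roots of f(x) - g(x) = 0.5*x**2 - 5*x + 12 = 0.
a, b, c = 0.5, -5.0, 12.0
d = math.sqrt(b * b - 4 * a * c)
x1, x2 = (-b - d) / (2 * a), (-b + d) / (2 * a)
print(x1, x2)  # 4.0 6.0
```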

Disadvantages of the graphical method:

  • Graphically, with the exception of simple cases, you can get only an approximate solution;
  • Not all values ​​and functions are computable; therefore, their graphs cannot be constructed independently;
  • Without knowing the properties of the functions included in the equation, it is impossible to precisely state whether the obtained set of solutions is exhaustive;
  • Most often, the applicability of this method is limited to plotting functions in the vicinity of the origin;
  • Reproduction of function graphs, as they say, “in the mind” can be quite difficult, in such cases you can’t do without any additional devices.

Advantages of the graphical method:

  • Ease of use (a high school level of knowledge is sufficient);
  • Availability in use (for example, when the solution to the equation has not yet been studied or is absent at all);
  • The visibility of the presentation (helps to better understand what any solution is and how it can be represented) [12] .

In addition to the described method, there are special modified graphical methods, such as, for example, Lill's method.

ODZ Estimation Method

The ODZ estimation method (ODZ: the domain of admissible values) consists in cutting off, from the range of values of a function, those values for which the function does not exist (in other words, cutting off the values it cannot take).

For example, let us solve the following system of equations by the ODZ estimation method:

$\begin{cases} \dfrac{1}{x+1} + x + 1 = 2 \\ \sin^2(x) = x \end{cases}$

We start with the upper equation, using the following property of the sum of mutually reciprocal numbers: $\frac{1}{n} + n \geqslant 2$, $n > 0$. It follows directly from a special case of the non-strict inequality of power means [13]. Moreover, equality to two is achieved only when these numbers are equal: $\frac{1}{x+1} = x + 1 \Longleftrightarrow (x+1)^2 = 1 \Longleftrightarrow x + 1 = \pm 1$. As a result, we get the solution set: $x_1 = 0$, $x_2 = -2$.

The lower equation contains the non-negative squaring function and the function $f(x) = \sin(x)$, whose values lie in the interval $[-1; 1]$.

As you can see, the second solution fails both criteria, which removes the need for a second check. It remains to check the first root: $\sin(0) = 0 \Longleftrightarrow \sin^2(0) = 0 \Longleftrightarrow 0 = 0$. Therefore, the only solution of the original system of equations is $x_1 = 0$.
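The ODZ filtering step can be sketched in Python: the left side of $\sin^2(x) = x$ lies in $[0, 1]$, so any root must too, and only the surviving candidates need a substitution check:

```python
import math

# Candidate roots found from the first equation 1/(x+1) + (x+1) = 2.
candidates = [0, -2]

# ODZ restriction from sin(x)**2 = x: the left side lies in [0, 1].
admissible = [x for x in candidates if 0 <= x <= 1]
solutions = [x for x in admissible
             if math.isclose(math.sin(x)**2, x, abs_tol=1e-12)]
print(solutions)  # [0]
```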

Disadvantages of the ODZ estimation method:

  • There are functions whose study is extremely difficult, which is why their properties remain unknown for a long time (for example, the function proposed by Riemann, $f(x) = \sum_{n=1}^{\infty} \frac{\sin(n^2 x)}{n^2}$ [14]);
  • Often the estimation method leads only to an interval in which the possible solutions lie, and then they have to be found by the selection method, taking the obtained constraint into account;
  • The estimation of function values is based on knowledge of their properties, which, as often happens, are not gathered together in a single source because of the sheer diversity of the functions themselves.

Advantages of the ODZ estimation method:

  • This method is useful when it is necessary to prove the absence of an admissible solution and other methods cannot do so;
  • By the ODZ estimation method, in contrast to the graphical and selection methods, it is possible to obtain an infinite number of feasible solutions;
  • As shown in the example, the competent use of the assessment method avoids additional checks;
  • Some equations are much easier to solve using this method (for example, special cases of irrational equations ).

Factorization Method

The factorization method (that is, factoring an equation) is used to represent it as a product of several less complex, most often similar, equations [15]. The expansion is based on the property that a product of several factors equals zero if and only if at least one of these factors also equals zero [16].

This method of solving specifically polynomial equations has been a separate branch of algebra for many centuries [17] and combines several solution algorithms at once. Its relevance and significance are a consequence of the fundamental theorem of algebra, according to which any polynomial of nonzero finite degree has at least one complex root.

The simplest of all decomposition methods is, perhaps, the division of a polynomial by a polynomial.
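Division by a linear factor $(x - r)$ can be sketched via Horner's scheme (synthetic division); the helper name is illustrative:

```python
# Dividing a polynomial by (x - r) via Horner's scheme (synthetic division).
# Coefficients are listed from the highest degree down.
def divide_by_root(coeffs, r):
    """Return (quotient_coeffs, remainder) of p(x) / (x - r)."""
    acc = [coeffs[0]]
    for c in coeffs[1:]:
        acc.append(acc[-1] * r + c)
    return acc[:-1], acc[-1]

# x**2 - 2*x + 1 divided by (x - 1) gives x - 1 with remainder 0.
q, rem = divide_by_root([1, -2, 1], 1)
print(q, rem)  # [1, -1] 0
```

A zero remainder confirms that $r$ is a root, i.e. $(x - r)$ is a factor.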

The disadvantages of the method of factorization of polynomials:

  • Relatively narrow specialization of the method (e.g. the equation $2^x - 3x + 2 = 0$ cannot be factorized, since no product of root-factor formulas can be obtained for it [18]);
  • The decomposition method combines several factorization techniques at once and is not applicable to all polynomials; in other words, it is not universal (some irrational equations can neither be solved analytically nor decomposed; a simple example: $x^{\pi} - x + 1 = 0$).

Advantages of the polynomial factorization method:

  • Some special cases of equations, for which a general solution algorithm has not been found or is too complicated, can only be solved by decomposition (for example, equations of the sixth and higher degrees, whose prospective solution algorithms are too cumbersome, difficult and time-consuming, so that their development becomes impractical [19]);
  • All decomposition methods were developed long ago, are available in open sources, do not go beyond the school curriculum and, apart from an ordinary calculator, as a rule require no additional knowledge or devices (including special software products).

Transformation Methods

These methods comprise sets of actions performed on both sides of an equation (before and after the equal sign) that lead to consequence equations or to equivalent equations which are much easier to solve, either because a known solution algorithm exists for them, or because they take a more convenient form that can quickly be matched to some known solution algorithm. The following is a list of the basic transformations.

Transfer of terms

Any term of the equation can be "moved to the other side of the equal sign", adding it to the other part of the equation with its sign changed to the opposite [20].

For example, let us solve in real numbers the equation $x^2 - 2x + 4 = 2x$.

To do this, we transfer the right-hand side to the left, changing its sign to the opposite: $x^2 - 2x + 4 - 2x = 0$.

Next, owing to the distributivity of multiplication over addition, we combine like terms: $x^2 - 4x + 4 = 0$.

Now it is easy to see that the resulting left-hand side matches the perfect-square formula: $(x-2)^2 = x^2 - 4x + 4$.

From here we find the roots: $\pm(x - 2) = 0 \longrightarrow x_1 = x_2 = 2$. Verification: $2^2 - 2 \cdot 2 + 4 = 2 \cdot 2 \Longleftrightarrow 4 = 4$.

The transfer of terms can be performed in all cases (as long as the argument is not taken out from under a function), and the resulting equations are equivalent.
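Transferring every term to one side replaces $f(x) = g(x)$ by $f(x) - g(x) = 0$ without changing the roots; a quick check in Python for the example above (lambda names are illustrative):

```python
# Original equation: x**2 - 2*x + 4 = 2*x; transferred form: lhs - rhs = 0.
lhs = lambda x: x**2 - 2*x + 4
rhs = lambda x: 2*x
moved = lambda x: lhs(x) - rhs(x)

# The root x = 2 satisfies both the transferred and the original equation.
print(moved(2), lhs(2) == rhs(2))  # 0 True
```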

Adding (subtracting) constants (expressions)

This transformation of equations is based on a property of numerical equality: its invariance under addition (a numerical equality remains valid if the same number, including a negative one, is added to both of its parts). In turn, this property of numerical equality is just a special case of the analogous property of numerical non-strict inequalities [21]. Since most equations to be solved are posed over some field of numbers (there are non-numerical equations, for example functional ones, where functions act as the unknown variable), the same numerical properties carry over to equations.

The essence of the transformation is that the same number, or an expression with a numerical function whose ODZ is no narrower than the ODZ of the functions in the original equation, can be added to both sides of the equation. The transfer of terms is just a special case of adding (subtracting) expressions. In particular, the "mutual cancellation" of identical terms on opposite sides of the equal sign is a consequence of the possibility of transfer.

Adding a numerical expression is always possible, but it leads to an equivalent equation only when the ODZ of the function in the expression is no narrower than the ODZ of the functions of the original equation. For example, by adding the expression $\sqrt{x}$ to both parts, we arrive at a corollary equation in which the non-negativity of the variable $x$ can weed out existing negative roots, so this restriction will have to be taken into account later.

A somewhat inverse technique is also useful: isolating a term, for example: $\frac{x^2 + 5x + 6}{x + 2} = 0 \Longleftrightarrow \frac{(x^2 + 4x + 4) + (x + 2)}{x + 2} = 0 \Longleftrightarrow \frac{(x+2)^2}{x+2} + \frac{x+2}{x+2} = 0 \Longleftrightarrow (x + 2) + 1 = 0 \longrightarrow x_1 = -3$.

Multiplication (division) by a nonzero constant (expression)

Multiplying a numerical equality (that is, a numerical equation) by the same nonzero numerical expression is a consequence of the possibility of adding this expression and therefore inherits its properties, adding, possibly, the restriction that the expression must not equal zero [20].

Using the previous example: $x^2 + 5x + 6 = 0 \Longleftrightarrow (x^2 + 4x + 4) + (x + 2) = 0 \Longleftrightarrow (x+2)^2 + (x+2) = 0$.

Now we divide both terms by $(x + 2)$: $\frac{(x+2)^2}{x+2} + \frac{x+2}{x+2} = 0 \Longleftrightarrow x + 2 + 1 = 0 \longrightarrow x_1 = -3$.

However, by dividing by this expression we introduced a restriction, its inequality to zero: $x + 2 \neq 0 \longrightarrow x \neq -2$. Therefore, it is now necessary to check whether this value, eliminated by that very restriction, is a root of the original equation: $(-2)^2 + 5 \cdot (-2) + 6 = 4 - 10 + 6 = 0$.

As you can see, narrowing the ODZ even by a single point (number) can greatly distort the set of all feasible solutions.
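A two-line Python check makes the lost root visible: the excluded point $x = -2$ does satisfy the original equation and must be restored to the solution set:

```python
# x**2 + 5*x + 6 = 0 factors as (x + 2)*(x + 3) = 0.
p = lambda x: x**2 + 5*x + 6

# Dividing by (x + 2) keeps only x = -3; the excluded point must be rechecked:
print(p(-3), p(-2))  # 0 0  (both -3 and the excluded -2 are roots)
```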

Replacing Expressions

An identical replacement of a variable by another expression containing functions of a variable, whose ODZ is no narrower than the ODZ of the functions of the original equation, also always leads to an equivalent equation. The possibility and equivalence of such a replacement rest on the transitivity property of numbers (if, in a triple of numbers, two of them are each equal to the third, then all three numbers are equal to each other [22]).

Substitution is very often used in solving equations of any kind and beyond (for example, for the third-degree equation there is Vieta's trigonometric formula; for finding antiderivatives, the universal Weierstrass trigonometric substitution; for integrals of rational functions, the special Euler substitutions, etc.).

In fact, any root formula of an equation is a special case of substitution in which the expression replacing the variable contains no variables at all (that is, the function in this expression takes only constants as its argument(s)).

Replacing an expression also helps to arrive at a simpler equation. However, many people confuse the roots of the corollary equation with the roots of the original equation, mistakenly substituting them into the wrong equation for verification. So, for example, after making the substitution $x = a^y$ and obtaining a specific value $y_0$ as a root of the corollary equation in the variable $y$, one must first substitute $y_0$ into the substitution formula $x = a^y$ to compute $x_0$, which will be a root of the original equation in the variable $x$ and which must be substituted into it for verification.
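The two-step discipline (undo the substitution first, then verify in the original equation) can be sketched in Python; the equation $4^x - 3 \cdot 2^x + 2 = 0$ and the substitution $t = 2^x$ are chosen for illustration:

```python
import math

# Substituting t = 2**x turns 4**x - 3*2**x + 2 = 0 into t**2 - 3*t + 2 = 0,
# whose roots are t = 1 and t = 2.
t_roots = [1.0, 2.0]

# Undo the substitution (x = log2(t)), then verify in the ORIGINAL equation.
x_roots = [math.log2(t) for t in t_roots]
original = lambda x: 4**x - 3 * 2**x + 2
checks = [math.isclose(original(x), 0, abs_tol=1e-12) for x in x_roots]
print(x_roots, checks)  # [0.0, 1.0] [True, True]
```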

However, there are types of equations for which certain types of substitution cannot be made.

For example, an equation of the form $a^{(n)}x = x$, $a \neq 0$, where $a^{(n)}x$ is a hyperoperator of order $n$ (for each of them there are additional restrictions on $a$).

If we make the substitution $x = a^{(n+1)}y$, we obtain the corollary equation: $a^{(n)}\bigl(a^{(n+1)}y\bigr) = a^{(n+1)}y \Longleftrightarrow a^{(n+1)}(y + 1) = a^{(n+1)}y$.

It follows that either $y + 1 = y$ and there is no solution (which contradicts "theoretical practice"), or the hyperoperators are multivalued (which is false for the first three operators: addition, multiplication and exponentiation).

For clarity, assume $n = 3$: $a^{(3)}x = x \Longleftrightarrow a^x = x$. Make the substitution $x = a^{(4)}y = {}^{y}a$: $a^{{}^{y}a} = {}^{y}a \Longleftrightarrow {}^{y+1}a = {}^{y}a$, which leads to the contradiction $y + 1 = y$, although a solution of this initial equation exists and is expressed through a superroot of the second degree [23].

Exponentiation

Thanks to the possibility of multiplying a numerical expression by a numerical expression, it becomes possible to raise a numerical expression to a nonzero power [20], which is a special case of multiplication with identical factors. However, exponentiation is strictly defined only for non-negative numbers, so when raising an expression with a variable to a power, one must state the corresponding restriction and take it into account later.

If raising a negative expression to a power nevertheless cannot be avoided, the exponent must be an integer, otherwise such a transformation leads to solving two equations instead of one and to an increase in the number of extraneous roots, since: $(-n)^{\frac{1}{k}} = -\sqrt[k]{n}$, $\frac{k+1}{2} \in \mathbb{Z}$, but at the same time $\frac{1}{k} = \frac{2}{2k} \longrightarrow (-n)^{\frac{2}{2k}} = \sqrt[2k]{(-n)^2} = \sqrt[2k]{1 \cdot n^2} = \sqrt[k]{n}$, $k \in \mathbb{Z}$. With irrational exponents the situation is so far undefined.

Raising zero (or an expression that can take a zero value) to the zeroth power is likewise impossible (see Indeterminate form).

Even exponents double the number of equations to be solved, since power functions with even exponents are even. The number of extraneous roots also increases [20].
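Squaring both sides is the classic way extraneous roots appear; a Python sketch (the equation $\sqrt{x} = x - 2$ is chosen for illustration, not taken from the source):

```python
import math

# Squaring both sides of sqrt(x) = x - 2 gives x**2 - 5*x + 4 = 0,
# whose roots are 1 and 4. Squaring can add extraneous roots, so each
# candidate must be checked in the original equation.
candidates = [1.0, 4.0]
genuine = [x for x in candidates if math.isclose(math.sqrt(x), x - 2)]
print(genuine)  # [4.0]  (x = 1 is extraneous: sqrt(1) = 1, but 1 - 2 = -1)
```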

Taking logarithms

According to the properties of numerical non-strict inequalities [21], one can take the logarithm of both sides of an equation. However, here too there are restrictions (over the field of real numbers):

  • If the logarithm is taken to a positive numerical base, the expression (number) under the logarithm must also be positive;
  • If the logarithm is taken to a negative numerical base, the expression (number) under the logarithm must also be negative (and the extension of the logarithm's definition must be explained);
  • Taking the logarithm of expressions whose values are opposite in sign to the base is impossible.

That is why taking logarithms, as a rule, leads not to an increase in extraneous roots but to the loss of genuine ones.

Potentiation

In contrast to exponentiation, numerical equalities can be turned into exponents [20]: $f(x_1, x_2, \ldots) = g(a_1, a_2, \ldots) \Longleftrightarrow b^{f(x_1, x_2, \ldots)} = b^{g(a_1, a_2, \ldots)}$, $b, a_1, a_2, \ldots = \mathrm{const}$.

While the numerical expressions can be arbitrary, the base $b$ must be positive (or negative, with the corresponding restrictions imposed on the variable).

Moreover, one can even potentiate the exponents of expressions; however, there is then a peculiar restrictive interdependence between the base and the exponents, because of which the base cannot be arbitrary: $f^{n}(x_1, x_2, \ldots) = g^{m}(a_1, a_2, \ldots) \Longleftrightarrow f^{(k^n)}(x_1, x_2, \ldots) = g^{(k^m)}(a_1, a_2, \ldots)$, if $k = \left(\frac{m}{n}\right)^{\frac{1}{m-n}}$.

This is easily proved as follows:

{\displaystyle f^{n}(x_{1},x_{2},\ldots )=g^{m}(a_{1},a_{2},\ldots )}

{\displaystyle f^{(k^{n})}(x_{1},x_{2},\ldots )=g^{(k^{m})}(a_{1},a_{2},\ldots )}

{\displaystyle f(x_{1},x_{2},\ldots )=g^{\frac {(k^{m})}{(k^{n})}}(a_{1},a_{2},\ldots )\Longleftrightarrow f(x_{1},x_{2},\ldots )=g^{(k^{m-n})}(a_{1},a_{2},\ldots )}

Substituting the resulting expression for {\displaystyle f(x_{1},x_{2},\ldots )} into the original equation:

{\displaystyle g^{n(k^{m-n})}(a_{1},a_{2},\ldots )=g^{m}(a_{1},a_{2},\ldots ),} from which we obtain: {\displaystyle n(k^{m-n})=m.} Further:

{\displaystyle k^{m-n}={\frac {m}{n}}\Longleftrightarrow k={\biggl (}{\frac {m}{n}}{\biggr )}^{\frac {1}{m-n}}.} In the case {\displaystyle m=1} the formula simplifies considerably:

{\displaystyle k={\biggl (}{\frac {1}{n}}{\biggr )}^{\frac {1}{1-n}}\Longleftrightarrow k={\frac {1}{n^{\frac {1}{1-n}}}}\Longleftrightarrow k=n^{-{\frac {1}{1-n}}}=n^{\frac {1}{n-1}}.}
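As a quick numeric sanity check of the relation just derived (an illustrative sketch, not part of the source; the names f, g, n, m, k mirror the derivation above), one can confirm that f^n = g^m indeed implies f^(k^n) = g^(k^m) when k = (m/n)^(1/(m-n)):

```python
import math

# Choose constants g, m, n, and pick f so that the premise f**n == g**m holds.
g, m, n = 3.0, 3, 2
f = g ** (m / n)                 # f = 3^(3/2), hence f**2 == 3**3

# The interdependence coefficient from the derivation above.
k = (m / n) ** (1 / (m - n))     # k = (3/2)^(1/1) = 1.5

lhs = f ** (k ** n)              # f^(k^2) = 3^(1.5 * 2.25) = 3^3.375
rhs = g ** (k ** m)              # g^(k^3) = 3^3.375
assert math.isclose(lhs, rhs)
```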

Raising to tetration of height 2

Numeric expressions can be raised to tetration of height 2 (that is, to the power of themselves):

{\displaystyle f(x_{1},x_{2},\ldots )=g(a_{1},a_{2},\ldots )\Longleftrightarrow {}^{2}f(x_{1},x_{2},\ldots )={}^{2}g(a_{1},a_{2},\ldots )\Longleftrightarrow {\bigl (}f(x_{1},x_{2},\ldots ){\bigr )}^{f(x_{1},x_{2},\ldots )}={\bigl (}g(a_{1},a_{2},\ldots ){\bigr )}^{g(a_{1},a_{2},\ldots )},\quad a_{1},a_{2},\ldots ={\text{const}}.}

Naturally, the same restrictions apply here: the expressions themselves must be positive, or exponentiation must be additionally defined for the case when they are negative.

Raising to higher tetration heights imposes certain restrictions in the form of interdependencies between the expressions (see above), since so-called "power towers" then arise. One can likewise extract the super-root of the corresponding height, but it should be kept in mind that for now this operation is precisely defined only for positive numbers.

Example: {\displaystyle x^{2}=4\Longleftrightarrow {}^{2}(x^{2})={}^{2}4\Longleftrightarrow (x^{2})^{(x^{2})}=4^{4}\Longleftrightarrow x^{2x^{2}}=256.}

Let us make the substitution {\displaystyle x=y^{\frac {1}{2}}:} {\displaystyle (y^{\frac {1}{2}})^{2(y^{\frac {1}{2}})^{2}}=256\Longleftrightarrow y^{y}=256\longrightarrow y=4\longrightarrow x_{1}=2.}

However, because tetration is undefined for non-positive numbers, we have lost the second root of the equation: {\displaystyle x_{2}=-2.}
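A brief check (a sketch, not from the source) makes the loss explicit: both roots of x² = 4 satisfy the transformed power equation x^(2x²) = 256, but the step through y^y = 256 only recovers the positive one, since y^y is taken over positive values of the substituted variable:

```python
# Both candidate roots satisfy x**(2*x**2) == 256 ...
roots = [x for x in (2, -2) if x ** (2 * x ** 2) == 256]

# ... but solving y**y == 256 over positive y gives only y = 4,
# i.e. x = y**0.5 = 2; the root x = -2 is lost.
y = 4
assert y ** y == 256 and roots == [2, -2]
```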

Super-potentiation

Likewise, thanks to the possibility of applying the previous iteration (raising to a power), numeric equalities can be moved into the height of a tetration:

{\displaystyle f(x_{1},x_{2},\ldots )=g(a_{1},a_{2},\ldots )\Longleftrightarrow {}^{f(x_{1},x_{2},\ldots )}b={}^{g(a_{1},a_{2},\ldots )}b;\quad b,a_{1},a_{2},\ldots ={\text{const}}.}

Here one must keep in mind the positivity of the base {\displaystyle b} (since even zero cannot be raised to the power of itself) and the various indeterminacies (unsettled conventions) concerning non-integer and/or negative tetration heights.

This pattern can be iterated further (see Pentation, Hyperoperation).

Super-logarithms of numeric expressions cannot yet be taken with full confidence, because the properties of hyperoperations and their inverse functions remain poorly studied, and it is unclear what restrictions such a transformation imposes.

Special solution methods

Transformations of trigonometric equations

Trigonometric equations are equations that contain, as functions of the variables, only trigonometric functions (that is, equations built solely from compositions of trigonometric functions).

When solving equations of this kind, various identities based on the properties of the trigonometric functions themselves are applied (see Trigonometric identities). In these transformations, however, one should keep in mind the composite nature of the tangent and cotangent, whose constituent sine and cosine are mutually independent functions of the same variable.

Thus, after the seemingly obvious substitution {\displaystyle {\text{tg}}(x)={\frac {\sin(x)}{\sqrt {1-\sin ^{2}(x)}}},} we obtain an entirely new function, whose values differ from those of the original tangent relation {\displaystyle {\text{tg}}(x)={\frac {\sin(x)}{\cos(x)}}} (see the graphs below).

 
[Figure: the graph of y = tg(x) without the substitution (left) and with cosine replaced via sine (right)]


This change occurs because the formula with the substitution implies the arithmetic root, whose value is always non-negative. If, however, we wrote "±", the tangent would lose its inherent single-valuedness as a function.

  • {\displaystyle \sin(x)=n\longrightarrow x=(-1)^{k}\arcsin(n)+\pi k,\ n\in [-1;1],\ k\in \mathbb {Z} ;}
  • {\displaystyle \cos(x)=n\longrightarrow x=\pm \arccos(n)+2\pi k,\ n\in [-1;1],\ k\in \mathbb {Z} ;}
  • {\displaystyle {\text{tg}}(x)=n\longrightarrow x={\text{arctg}}(n)+\pi k,\ n\in \mathbb {R} ,\ k\in \mathbb {Z} ;}
  • {\displaystyle {\text{ctg}}(x)=n\longrightarrow x={\text{arcctg}}(n)+\pi k,\ n\in \mathbb {R} ,\ k\in \mathbb {Z} .}
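The first of these general-solution formulas can be spot-checked numerically (an illustrative sketch, not from the source): every x = (-1)^k·arcsin(n) + πk must satisfy sin(x) = n:

```python
import math

n = 0.5
for k in range(-3, 4):
    x = (-1) ** k * math.asin(n) + math.pi * k
    # Each member of the solution family satisfies the original equation.
    assert math.isclose(math.sin(x), n, abs_tol=1e-12)
```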

As an example, let us solve a somewhat harder equation: {\displaystyle \sin(x)\cos(x)\cos(2x)={\frac {1}{8}}.}

Since {\displaystyle \sin(x)\cos(x)={\frac {1}{2}}\sin(2x),} we obtain: {\displaystyle {\frac {1}{2}}\sin(2x)\cos(2x)={\frac {1}{8}}.}

Multiplying by 4, we again obtain the sine of a double angle: {\displaystyle 2\sin(2x)\cos(2x)={\frac {1}{2}}\Longleftrightarrow \sin {4x}={\frac {1}{2}}.}

The final root formula: {\displaystyle x={\frac {(-1)^{k}\arcsin {\frac {1}{2}}+\pi k}{4}}=(-1)^{k}{\frac {\pi }{24}}+{\frac {\pi k}{4}},\ k\in \mathbb {Z} .}
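The derived family of roots can be verified by substitution into the original equation (an illustrative sketch, not from the source):

```python
import math

# Roots x = (-1)**k * pi/24 + pi*k/4 of sin(x)*cos(x)*cos(2x) = 1/8.
for k in range(-6, 7):
    x = (-1) ** k * math.pi / 24 + math.pi * k / 4
    lhs = math.sin(x) * math.cos(x) * math.cos(2 * x)
    assert math.isclose(lhs, 1 / 8, abs_tol=1e-12)
```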

Transformations of differential and integral equations

Differential equations are, as a rule, equations containing numeric functions and their derivatives. Thus, all the transformations performed on numeric equations extend to these types of equations as well. The main thing to remember is that it is preferable to carry out transformations under which the domains of admissible values of the functions entering the equation do not change at all. A distinctive feature of differential equations, compared with numeric ones, is the possibility of integrating (differentiating) them on both sides of the equality sign.

Differential equations, like numeric ones, are solved either analytically (symbolic integration), by finding an antiderivative, or numerically, by evaluating a definite integral over some interval. Below are the basic and most frequently used transformations for finding an analytic solution.

Most types of differential equations can be reduced to separable equations, whose general solution is already known [24]. Such transformations include [24]:

  • Reducing homogeneous equations by the substitution {\displaystyle y(x)=xz(x)} for {\displaystyle x>0;}
  • Reducing quasi-homogeneous equations to homogeneous ones by the substitution {\displaystyle y(x)=z^{\frac {\beta }{\alpha }},} and then to separable equations.

Linear differential equations are usually solved by one of three methods [24]:

  • The integrating factor method;
  • The Lagrange method (variation of the constant);
  • The Bernoulli method.

Bernoulli differential equations are likewise reduced, by means of substitutions, either to linear equations or to separable ones [25].

Homogeneous linear differential equations of second and higher order with constant coefficients are solved by substituting the function {\displaystyle y(x)=e^{kx}} and thereby passing to a characteristic algebraic equation in the variable {\displaystyle k,} of degree equal to the order of the original differential equation.
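For instance (a standard textbook illustration, not taken from the source), for a second-order equation with constant coefficients:

```latex
y'' - 3y' + 2y = 0
\;\xrightarrow{\;y=e^{kx}\;}\;
(k^{2}-3k+2)\,e^{kx} = 0
\;\Longleftrightarrow\;
k^{2}-3k+2 = 0
\;\Longrightarrow\;
k_{1}=1,\;k_{2}=2,
```

so the general solution is {\displaystyle y(x)=C_{1}e^{x}+C_{2}e^{2x}.}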

There are types of higher-order differential equations whose order can be lowered by replacing a derivative of some order with another function. In the same way they can be reduced to separable equations.

Integral equations are more complex than differential ones, but their solutions, like those of differential equations, often involve integral transforms:

  • The Fourier transform;
  • The Laplace transform;
  • The Hartley transform;
  • The Abel integral transform;
  • The identity transform;
  • and others (see Integral transforms#Table of transforms).

Besides differential and integral equations there is also a mixed type, integro-differential equations, the main approach to which is to reduce them, by various methods, to the two previous types of equations.

Transformations of functional equations

There is no general solution of functional equations, nor are there general methods. Functional equations are themselves properties of their solution, a function or class of functions. For example, the solution of Abel's functional equation {\displaystyle \alpha (f(x))=\alpha (x)+1,\ f(x)=a^{x}} is the function {\displaystyle \alpha (x)={\text{slog}}_{a}x.} [26]

Numerical methods for solving equations

These methods form a separate family of algorithms for obtaining the solution of a specific equation to a given accuracy. The main differences from an analytic solution are:

  • Computational error (in the analytic approach, irrational numbers are available as formulas in rational ones, and so, if desired, can be computed to any accuracy in any particular case);
  • Universality of application (the same numerical methods can be applied to equations of entirely different types);
  • Restarting the solution process (for each particular instance of the same kind of equation the method must be applied anew from the very beginning, unlike an analytic solution, where, once it is known, computing the roots only requires substituting the needed coefficients into the already known, i.e. previously derived, formula);
  • The need for additional tools (such as calculators and software; analytic solutions are devised "by hand", although there exist dedicated websites and installable software capable of outputting the formulas of already known analytic solutions).

Bisection (dichotomy) method

This numerical method for solving an equation relies on a continuous function taking values of opposite sign on the two sides of its zero. The algorithm itself is quite simple:

  1. Take an interval at whose endpoints the function takes values of opposite sign;
  2. Halve the interval, then multiply the value of the function at the midpoint by the values at the endpoints: a negative result narrows the original interval to the half, from the former midpoint to the endpoint at which the product was negative;
  3. Halve the new interval and repeat the procedure until the interval shrinks to the required accuracy.

Example: let us find the positive root of the equation {\displaystyle 2^{x}=x^{2}+2.} To do this, rewrite the equation as a function: {\displaystyle f(x)=2^{x}-x^{2}-2.} By plotting this function it is easy to see that the desired value lies in the interval {\displaystyle [4;\ 5].} Evaluate the function at the endpoints and the midpoint of this interval: {\displaystyle f(4)=-2;} {\displaystyle f(5)=5;} {\displaystyle f(4.5)\approx 0.377416997969519.} As can be seen, the product of {\displaystyle f(4)} and {\displaystyle f(4.5)} is negative, unlike {\displaystyle f(4.5)\cdot f(5).} The interval containing the root is now narrowed: {\displaystyle [4;\ 4.5].} We repeat the procedure (the values at the endpoints are already known from the previous computations): {\displaystyle f(4.25)\approx -1.035186159956460,} so the interval now shrinks "from the other side": {\displaystyle [4.25;\ 4.5].} Next cycle: {\displaystyle f(4.375)\approx -0.391192125583853,} giving a new interval: {\displaystyle [4.375;\ 4.5].} The cycle continues until the required accuracy is reached, after which the endpoint whose function value is closest to zero is taken as the approximate value of the root. In our example, the value 4.44129 is the root of the original equation to five decimal places.
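The three steps above can be sketched directly in code (an illustrative implementation, not part of the source); applied to the worked example f(x) = 2^x - x^2 - 2 on [4; 5], it reproduces the root 4.44129:

```python
def bisect(f, a, b, tol=1e-6):
    """Bisection: repeatedly halve [a, b], keeping the sign change inside."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "endpoints must give values of opposite sign"
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm < 0:          # sign change in [a, m]
            b, fb = m, fm
        else:                    # sign change in [m, b]
            a, fa = m, fm
    return (a + b) / 2

root = bisect(lambda x: 2 ** x - x ** 2 - 2, 4, 5)
```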

Chord method (secant method)

An iterative numerical method for finding the root of an equation to a given accuracy, based on successive approximation to the root through the intersections of chords with the abscissa axis. The following formula is used here:

{\displaystyle x_{i+1}=x_{i}-{\dfrac {f(x_{i})\cdot (x_{i}-x_{0})}{f(x_{i})-f(x_{0})}},} however, it has a low convergence rate, so the following algorithm is more often used instead:

{\displaystyle x_{i+1}=x_{i-1}-{\dfrac {f(x_{i-1})\cdot (x_{i}-x_{i-1})}{f(x_{i})-f(x_{i-1})}};} in different sources both of these formulas are named differently, as the chord method and/or the secant method.

In geometric terms, the general algorithm for using the method is:

  1. First make sure that the function of the equation is continuous and that the interval under consideration contains exactly one root and no zeros of the derivative (otherwise the computation may fail to converge at all);
  2. Then choose two points on the graph of the function whose abscissas lie in the given interval and at which the function values are opposite in sign;
  3. Connect these two points, forming a chord (secant), and compute the point where the chord crosses the abscissa axis;
  4. From this intersection point, draw a perpendicular to the abscissa axis up to the graph of the function (the projection of the intersection point onto the graph);
  5. Connect the resulting point on the graph with the opposite end of the existing chord, forming a new chord, whose intersection with the abscissa axis must again be computed, and so on.
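The iteration formula translates into code as follows (an illustrative sketch, not part of the source); on the bisection section's example f(x) = 2^x - x^2 - 2 it converges to the same root. The two displayed formulas are algebraically equivalent; the code uses the form anchored at the latest point:

```python
def secant(f, x0, x1, tol=1e-10, max_iter=100):
    """Secant iteration: x_{i+1} = x_i - f(x_i)*(x_i - x_{i-1}) / (f(x_i) - f(x_{i-1}))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

root = secant(lambda x: 2 ** x - x ** 2 - 2, 4.0, 5.0)
```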

Newton's Method

The main idea of Newton's method is the iterative approximation of a differentiable function according to the following algorithm [27]:

{\displaystyle f'(x_{n})={\text{tg}}\,\alpha _{n}={\frac {\Delta y}{\Delta x}}={\frac {0-f(x_{n})}{x_{n+1}-x_{n}}}\longrightarrow x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}}

First make sure that the function set equal to zero in the equation satisfies the criteria, restrictions and applicability conditions of this method, and then make sure that there are no other roots near the one sought (otherwise the iteration can simply "get confused" between them). Now choose a value of the variable {\displaystyle x_{n}} close to the root (the closer, the better) and substitute it into the formula above. There are two possible outcomes:

  1. If the obtained value {\displaystyle x_{n+1}} lies in the same interval as the desired root, it can again be substituted into the formula: each successive value is more accurate than the previous one;
  2. If the obtained value {\displaystyle x_{n+1}} does not lie in the same interval as the desired root, {\displaystyle x_{n+1}} must be replaced by {\displaystyle {\frac {x_{n}+x_{n+1}}{2}}} until the new value returns to the interval.

The iterative process continues until the obtained approximation of the desired root of the equation reaches the required accuracy.
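A minimal sketch of the iteration x_{n+1} = x_n - f(x_n)/f'(x_n) (illustrative, not from the source), applied to the running example f(x) = 2^x - x^2 - 2 with the derivative supplied explicitly:

```python
import math

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton iteration x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(lambda x: 2 ** x - x ** 2 - 2,
              lambda x: 2 ** x * math.log(2) - 2 * x,   # f'(x)
              4.5)                                      # start close to the root
```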

Simple Iteration Method

Generalizing the chord (secant) method and Newton's method, one may conclude that they are both variations of the same algorithm, which can be described as follows:

  1. The equation {\displaystyle f(x)=0} is reduced to the form {\displaystyle x=\varphi (x),} so the iterative formula can be written as {\displaystyle x_{i+1}=\varphi (x_{i});}
  2. The function {\displaystyle \varphi (x_{i})} must be chosen in accordance with the convergence conditions of the method; usually {\displaystyle \varphi (x_{i})=x_{i}-\lambda (x)f(x_{i}),} where for {\displaystyle \lambda (x)} one can choose a constant {\displaystyle \lambda _{0}} whose sign coincides with the sign of the derivative {\displaystyle f'(x)} on the segment connecting the true root and the initial value {\displaystyle x_{0}.}

In particular, setting {\displaystyle \lambda _{0}={\frac {1}{f'(x_{0})}},} we arrive at an algorithm called the single-tangent method; and with {\displaystyle \lambda (x)={\frac {1}{f'(x)}}} we recover Newton's method itself.

Example: find an approximation to the root of the equation {\displaystyle 0.25\sin(x)-x-\pi =0.} First we define a function {\displaystyle \varphi (x)} and express {\displaystyle x} through it:

{\displaystyle x=0.25\sin(x)-\pi \longrightarrow \varphi (x)=0.25\sin(x)-\pi ;} we must now verify that the obtained function satisfies the convergence condition {\displaystyle |\varphi '(x)|<1:}

{\displaystyle \varphi '(x)=0.25\cos(x)\longrightarrow |0.25\cos(x)|<1,} since {\displaystyle \cos(x)\in {[-1;1]}\ \forall x.}

It now remains to choose a value for the first iteration close to the root (the closer, the faster the method converges). Let {\displaystyle x_{0}=-3;} then {\displaystyle \varphi (-3)=x_{1}=0.25\sin(-3)-\pi \approx -3.176872655604760.}

Repeat the procedure with the new value: {\displaystyle \varphi (x_{1})=x_{2}\approx 0.25\sin(-3.176872655604760)-\pi \approx -3.132774482649750\ldots }

Having completed 22 steps of the iteration in this way, we obtain the approximation {\displaystyle x_{22}\approx -3.141592653589790,} for which the equality {\displaystyle x_{22}=-\pi } holds to the fifteenth decimal place. Verification: {\displaystyle 0.25\sin(-\pi )-(-\pi )-\pi =0.25\cdot 0+\pi -\pi =0\Longleftrightarrow 0=0.}

Note that the convergence rate also depends on the function itself. Thus, if instead of the factor {\displaystyle 0.25} we put {\displaystyle 0.5,} then with the same initial value {\displaystyle x_{0}} and the same error tolerance the number of steps grows from 22 to 44.
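The worked example above can be reproduced in a few lines (an illustrative sketch, not part of the source):

```python
import math

# Fixed-point iteration for 0.25*sin(x) - x - pi = 0,
# rewritten as x = phi(x) with phi(x) = 0.25*sin(x) - pi.
def phi(x):
    return 0.25 * math.sin(x) - math.pi

x = -3.0                  # initial value close to the root
for _ in range(40):
    x = phi(x)            # each step shrinks the error by a factor of about 0.25

# The iteration converges to the exact root x = -pi.
```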

Solution Verification Methods

Verification of the solution is necessary in order to determine whether the obtained solution is true or extraneous. An equation is a special case of a problem, so similar verification methods apply to equations as well, namely [28]:

  • Checking the solution algorithm is the main method of verifying the course of the solution; it consists in justifying the logic of all the mathematical steps performed (i.e., their consistency with the mathematical theories within which the equation is solved).

However, verification of the algorithm is not always possible, or not possible in full; moreover, errors may be made while performing the verification itself, and this method almost never checks the completeness of the solution. In such cases other methods are used, for example [28]:

  • Substituting the roots into the original equation verifies that the equation becomes an identity for the given solution (infinite sets of solutions, however, cannot be verified in this way).
  • Checking against the domain of admissible values does not guarantee the correctness and completeness of the solution, but it establishes which roots are genuine and helps avoid additional solutions (and hence additional checks) if extraneous roots appear.
  • The analytic solution is checked on simple and/or limiting cases in order to prove its universality or to reveal restrictive functions in it, i.e. to find the range of possible solutions of this particular type of equation.
  • Checking that the structure of the solution matches the structure of the equation makes it possible to anticipate additional possible solutions of the equation based on the properties of the functions entering it, such as symmetry, parity, recurrence, etc.
  • An alternative solution is useful when some algorithm (an analytic solution) needs to be checked; thanks to this method, new formulas, relations and interdependencies between already known functions are discovered.

Screening methods for extraneous roots

Criteria for the existence of admissible solutions of equations

Notes

  1. Closed-form expression (English) // Wikipedia. - 2018-06-06.
  2. Kudryashova T. G. Methods for solving mathematical problems. 5th grade. - M.: Non-commercial partnership "Volnoe delo", 2008. - P. 132. - 208 p. - ISBN 978-5-90415-801-9.
  3. Baklanova E. A. Solving text problems in various ways // Festival of educational ideas "Open lesson": website. - 2012.
  4. Farrugia P. S., Mann R. B., Scott T. C. N-body Gravity and the Schrödinger Equation (English) // Class. Quantum Grav. - 2007. - Vol. 24, no. 18. - P. 4647-4659. - DOI: 10.1088/0264-9381/24/18/006.
  5. Kolmogorov A. N., Abramov A. M., Dudnitsyn Yu. P., Ivlev B. M., Schwarzburd S. I. Chapter IV. Exponential and logarithmic functions, par. 10, p. 40. The concept of inverse functions // Algebra and the beginnings of mathematical analysis: textbook for grades 10-11 of general education institutions / ed. A. N. Kolmogorov. - 17th ed. - M.: Education, 2008. - P. 246-247. - ISBN 978-5-09-019513-3.
  6. Mordkovich A. G., Semenov P. V. Chapter 4. Trigonometric equations, par. 23 - Methods for solving trigonometric equations, p. 2. Method of factoring // Algebra and the beginnings of mathematical analysis. Grade 10. In 2 parts. Part 1: textbook for students of educational institutions (profile level). - 6th ed. - M.: Mnemosyne, 2009. - P. 191. - ISBN 978-5-346-01201-6.
  7. Mordkovich A. G. Chapter 4. Quadratic equations, sec. 30 - Irrational equations // Algebra. 8th grade. In 2 parts. Part 1: textbook for students of educational institutions. - 12th ed. - M.: Mnemosyne, 2010. - P. 175. - ISBN 978-5-346-01427-0.
  8. Exponential function (Russian) // Wikipedia. - 2017-12-21.
  9. Quadratic function // Big School Encyclopedia. - M.: "Russian Encyclopedic Partnership", 2004. - P. 118-119.
  10. Riemann integral (Russian) // Wikipedia. - 2017-03-11.
  11. Differentiable function (Russian) // Wikipedia. - 2018-05-20.
  12. Makarychev Yu. N., Mindyuk N. G., Neshkov K. I., Suvorova S. B. Functions, par. 5 - Functions and their graphs, p. 14 - Graph of a function // Algebra. Grade 7: textbook for general education institutions / ed. S. A. Telyakovsky. - 18th ed. - M.: Education, 2009. - P. 60. - ISBN 978-5-09-021255-7.
  13. Zhogin I. I. On means // Mathematical Education: journal / ed. I. N. Bronstein, A. M. Lopshits, A. A. Lyapunov, A. I. Markushevich, I. M. Yaglom. - M.: State Publishing House of Physical and Mathematical Literature, 1961. - Issue 6. - P. 217.
  14. Gerver J. The Differentiability of the Riemann Function at Certain Rational Multiples of π // American Journal of Mathematics. - 1970. - Vol. 92, no. 1. - P. 33-55. - DOI: 10.2307/2373496.
  15. Mordkovich A. G., Semenov P. V. Chapter 6. Equations and inequalities. Systems of equations and inequalities, par. 27 - General methods for solving equations // Algebra and the beginnings of analysis. Grade 11. In 2 parts. Part 1: textbook for educational institutions (profile level). - M.: Mnemosyne, 2007. - P. 211-218. - ISBN 5-346-729-6.
  16. Savin A. P. Zero // Encyclopedic Dictionary of the Young Mathematician / comp. A. P. Savin. - M.: "Pedagogika", 1989. - P. 219.
  17. Mathematics of the 17th century // History of Mathematics / ed. A. P. Yushkevich, in three volumes. - M.: Nauka, 1970. - Vol. II. - ilib.mccme.ru. Retrieved June 3, 2018.
  18. Super-root (Russian) // Wikipedia. - 2018-05-31.
  19. King R. B. Chapter 8. Beyond the Quintic Equation // Beyond the Quartic Equation. - Birkhäuser Boston, 2008. - P. 139-149. - 149 p. - (Modern Birkhäuser Classics). - ISBN 0817648364.
  20. Mordkovich A. G., Semenov P. V. Chapter 6. Equations and inequalities. Systems of equations and inequalities, par. 26 - Equivalence of equations // Algebra and the beginnings of analysis. Grade 11. In 2 parts. Part 1: textbook for educational institutions (profile level). - M.: Mnemosyne, 2007. - P. 201-211. - ISBN 5-346-729-6.
  21. Inequalities // Mathematical Encyclopedia (in 5 volumes). - M.: Soviet Encyclopedia, 1982. - Vol. 3. - P. 999.
  22. Equality to the third (Russian) // Wikipedia. - 2017-02-21.
  23. Super-root (Russian) // Wikipedia. - 2018-06-22.
  24. Ordinary differential equation (Russian) // Wikipedia. - 2018-05-27.
  25. Bernoulli differential equation (Russian) // Wikipedia. - 2017-04-06.
  26. Superlogarithm (Russian) // Wikipedia. - 2018-07-06.
  27. Newton's method (Russian) // Wikipedia. - 2018-05-21.
  28. Khudak Yu. I., Aslanyan A. G. Verifying the correctness of the task is an important stage in the education and upbringing of the schoolchild // Peoples' Friendship University of Russia (RUDN): website. - P. 25-30.

Source: https://ru.wikipedia.org/w/index.php?title=Equation_Solution&oldid=99167492

