Algorithm theory is a science at the intersection of mathematics and computer science that studies the general properties and laws of algorithms and the various formal models for representing them. The problems of the theory of algorithms include formally proving the algorithmic unsolvability of problems, analyzing the complexity of algorithms, classifying algorithms into complexity classes, and developing criteria for the comparative evaluation of the quality of algorithms. Together with mathematical logic, the theory of algorithms forms the theoretical basis of the computational sciences [1][2].
History
The development of the theory of algorithms begins with Kurt Gödel's proof of the incompleteness theorems for formal systems that include arithmetic, the first of which was proved in 1931. The conjecture arising from these theorems, that many mathematical problems (in particular, the derivability problem in predicate calculus) admit no algorithmic solution, made it necessary to formalize the concept of an algorithm. The first formalized versions of this concept were developed in the 1930s by Alan Turing, Emil Post, and Alonzo Church. The Turing machine, the Post machine, and the lambda calculus that they proposed turned out to be equivalent to one another. Based on Gödel's work, Stephen Kleene introduced the concept of a recursive function, which also proved equivalent to the above.
One of the most successful formalizations of the algorithm is the concept of a normal algorithm, introduced by Andrei Markov a decade after the work of Turing, Post, Church, and Kleene, in connection with proofs of the algorithmic unsolvability of a number of algebraic problems.
In subsequent years, Donald Knuth, Alfred Aho, and Jeffrey Ullman made significant contributions to the theory of algorithms.
Models of computation
- Turing machine - an abstract executor (abstract computing machine), proposed by Alan Turing in 1936 to formalize the concept of an algorithm. A Turing machine extends the finite state machine and, according to the Church–Turing thesis, can simulate any other executor (by a suitable choice of transition rules) that implements a step-by-step computation process in which each step is sufficiently elementary (see the sketch after this list).
- Lambda calculus - considers a pair consisting of a λ-expression and its argument; computation is the application of the first member of the pair to the second. This makes it possible to separate a function from what it is applied to. More generally, a computation is a chain that starts with an initial λ-expression and continues with a finite sequence of λ-expressions, each obtained from the previous one by β-reduction, i.e., by the substitution rule.
- Combinatory logic - its interpretation of computation is similar to that of the λ-calculus, but there are important differences (for example, the fixed-point combinator Y has a normal form in combinatory logic but not in the λ-calculus). Combinatory logic was originally developed to study the nature of paradoxes and to build conceptually clear foundations of mathematics; the notion of a variable was excluded entirely, which helped clarify the role and place of variables in mathematics.
- Register machines, in particular the RAM machine - an abstract computing machine that models a computer with random access memory. It is this model of computation that is most often used in the analysis of algorithms.
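As an illustration of the first of these models, here is a minimal sketch of a deterministic Turing machine in Python. The specific machine below (a unary increment) and its transition table are invented for the example; a full formalization would also fix the alphabet and the set of states explicitly.

```python
# A minimal deterministic Turing machine: a transition table maps
# (state, symbol) to (new state, symbol to write, head move).
def run_turing_machine(rules, tape, state="q0", blank="_", max_steps=10_000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example machine: append one '1' to a unary number (increment).
rules = {
    ("q0", "1"): ("q0", "1", "R"),    # scan right over the 1s
    ("q0", "_"): ("halt", "1", "R"),  # write a 1 on the first blank, halt
}
print(run_turing_machine(rules, "111"))  # -> '1111'
```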
Church–Turing thesis and algorithmically unsolvable problems
Alan Turing conjectured (in what is now known as the Church–Turing thesis) that any algorithm, in the intuitive sense of the word, can be represented by an equivalent Turing machine. Refining the notion of computability on the basis of this machine (and other equivalent concepts) opened up the possibility of rigorously proving the algorithmic unsolvability of various mass problems (problems of finding a single method for solving a whole class of problems whose conditions can vary within certain limits).
The simplest example of an algorithmically unsolvable mass problem is the applicability problem, also called the halting problem, which is as follows: find a general method that, for an arbitrary Turing machine (given by its program) and an arbitrary initial state of its tape, would determine whether the machine halts in a finite number of steps or runs indefinitely.
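The classical argument for this unsolvability can be sketched directly in code. Assume, for contradiction, a hypothetical total function halts(program, argument) that always answers correctly; the names below are illustrative, not a real API:

```python
def halts(program, argument):
    """Hypothetical oracle: returns True iff program(argument) terminates.
    The diagonal construction below shows no such function can exist."""
    ...

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # the program applied to its own text.
    if halts(program, program):
        while True:       # loop forever if the oracle says "halts"
            pass
    return "done"         # halt if the oracle says "loops"

# paradox(paradox) would halt if and only if it does not halt,
# a contradiction -- so halts() cannot exist.
```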
During the first decade of the history of the theory of algorithms, unsolvable mass problems were discovered only within the theory itself (including the applicability problem described above) and in mathematical logic (the derivability problem in classical predicate calculus). It was therefore believed that the theory of algorithms lay on the periphery of mathematics and did not matter for such classical fields as algebra or analysis. The situation changed after Andrei Markov and Emil Post established, in 1947, the algorithmic unsolvability of the word (equality) problem, known in algebra, for finitely generated and finitely presented semigroups (the Thue problem). Subsequently, the algorithmic unsolvability of many other "purely mathematical" mass problems was established; the most famous result in this area is the algorithmic unsolvability of Hilbert's tenth problem, proved by Yuri Matiyasevich.
Directions
The theory of algorithms develops mainly in three directions:
- Classical:
  - Formal statement of problems;
  - The concept of a decision problem;
  - Classification of problems by complexity class ("P", "NP", and others).
- Asymptotic analysis:
  - Estimation of resource consumption and running time (in particular, for recursive algorithms);
  - Estimation of the growth of resource requirements (for example, running time) as the volume of data increases.
- Practical analysis:
  - Obtaining "explicit" complexity functions;
  - Interval analysis of functions;
  - The search for quality criteria;
  - A methodology for the rational choice of algorithms.
Analysis of the complexity of algorithms
The purpose of analysis is to find the optimal algorithm. The criterion is the complexity: the number of elementary operations the algorithm requires. The complexity function expresses how this cost depends on the input data.
The complexity can be affected, to varying degrees, by several characteristics of the input data (see the sketch after this list):
- its volume;
- its values;
- the order in which it arrives.
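As a small illustration of the last two points, the sketch below (in Python; the comparison counter and the choice of insertion sort are ours) shows that the same algorithm on inputs of identical volume can have very different costs depending on the order of the values:

```python
# Insertion sort with an explicit comparison counter: the volume of the
# input is fixed, only the order of the values changes.
def insertion_sort_cost(data):
    a = list(data)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1
            if a[j - 1] <= a[j]:
                break
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return comparisons

n = 1000
print(insertion_sort_cost(range(n)))         # already sorted: n-1 comparisons
print(insertion_sort_cost(range(n, 0, -1)))  # reversed: ~n^2/2 comparisons
```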
One simplified kind of analysis of algorithms is asymptotic analysis, which considers large volumes of input data. The resulting estimate of the complexity function is the "complexity" of the algorithm, which relates the algorithm's cost to the volume of data. Asymptotic analysis of algorithms uses the notation of mathematical asymptotic analysis. The basic complexity estimates are given below.
The basic estimate of the complexity function $f(n)$ of an algorithm (where $n$ is the volume of data, the "input length") is $\Theta$:

$f(n) = \Theta(g(n))$ if, for $g > 0$ and $n > 0$, there exist positive $c_1$, $c_2$, $n_0$ such that:

- $c_1 g(n) \le f(n) \le c_2 g(n)$ for $n > n_0$;

in other words, one can find $c_1$ and $c_2$ such that, for sufficiently large $n$, $f(n)$ lies between $c_1 g(n)$ and $c_2 g(n)$.

In this case the function $g(n)$ is called an asymptotically tight estimate of the function $f(n)$, since by definition $f(n)$ differs from $g(n)$ only up to a constant factor (see asymptotic equality). For example, for the heapsort sorting method the complexity estimate is:

- $f(n) = \Theta(n \log n)$, i.e. $g(n) = n \log n$.

From $f(n) = \Theta(g(n))$ it follows that $g(n) = \Theta(f(n))$.

$\Theta(g(n))$ is not a single function but a set of functions describing growth up to a constant factor. $\Theta$ gives both an upper and a lower bound on the growth of the function, and often these bounds must be considered separately. The estimate $O$ is an upper asymptotic bound on the complexity of the algorithm. We say that $f(n) = O(g(n))$ if:

- there exist $c > 0$ and $n_0$ such that $0 \le f(n) \le c\,g(n)$ for all $n > n_0$.

In other words, $f(n) = O(g(n))$ means that $f(n)$ belongs to the class of functions that grow no faster than $g(n)$, up to a constant factor.

The estimate $\Omega$ gives a lower asymptotic bound on the growth of a function and defines a class of functions that grow no slower than $g(n)$, up to a constant factor. $f(n) = \Omega(g(n))$ if:

- there exist $c > 0$ and $n_0$ such that $0 \le c\,g(n) \le f(n)$ for all $n > n_0$.

For example, $f(n) = \Omega(n \log n)$ denotes the class of functions that grow no slower than $g(n) = n \log n$; this class contains all polynomials of degree greater than one, as well as all exponential functions with base greater than one.

The equality $f(n) = \Theta(g(n))$ holds if and only if $f(n) = O(g(n))$ and $f(n) = \Omega(g(n))$.
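These definitions can be checked numerically. Below is a small sketch in Python that instruments a simple merge sort (standing in for any $\Theta(n \log n)$ comparison sort, such as the heapsort mentioned above) and verifies the $\Theta$ inequality; the constants $c_1 = 0.3$, $c_2 = 1.1$, $n_0 = 16$ are illustrative choices that happen to work for this implementation, not canonical values.

```python
import math
import random

# Count the comparisons a straightforward merge sort actually performs.
def merge_sort_cost(a):
    if len(a) <= 1:
        return a, 0
    mid = len(a) // 2
    left, cl = merge_sort_cost(a[:mid])
    right, cr = merge_sort_cost(a[mid:])
    merged, cost = [], cl + cr
    i = j = 0
    while i < len(left) and j < len(right):
        cost += 1                      # one comparison per merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]
    return merged, cost

# Check the Theta definition: c1*g(n) <= f(n) <= c2*g(n) for n > n0.
c1, c2, n0 = 0.3, 1.1, 16
for n in [100, 1000, 10000]:
    _, f = merge_sort_cost([random.random() for _ in range(n)])
    g = n * math.log2(n)
    assert n <= n0 or c1 * g <= f <= c2 * g
    print(n, f, round(c1 * g), round(c2 * g))
```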
Asymptotic analysis of algorithms has not only practical but also theoretical significance. For example, it has been proved that all sorting algorithms based on pairwise comparison of elements sort $n$ elements in time no less than $n \log n$, that is, in $\Omega(n \log n)$.
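The reasoning behind this bound fits in one line: a comparison sort can be viewed as a binary decision tree whose leaves are the $n!$ possible orderings of the input, so the depth $d$ of the tree (the worst-case number of comparisons) must satisfy

- $2^d \ge n!$, hence $d \ge \log_2 n! = \Theta(n \log n)$ by Stirling's approximation.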
An important role in the development of asymptotic analysis of algorithms was played by Alfred Aho, Jeffrey Ullman, and John Hopcroft.
Complexity classes
Within the framework of the classical theory, problems are classified by their complexity (P, NP, exponentially complex, and others):
- "P" - problems that can be solved in time polynomial in the volume of the source data by a deterministic computing device (for example, a Turing machine);
- "NP":
- Tasks whose solution is feasible in polynomially expressed time using a non-deterministic computer (the next state of which is not always uniquely determined by the previous ones). Her work can be represented as a process branching out at each ambiguity: the problem is solved if at least one branch has reached an answer;
- Problems whose solution with the help of additional information of polynomial length given to us above can be verified in polynomial time. In particular, the class “NP” includes all problems whose solution can be verified in polynomial time.
Class "P" is contained in "NP". A classic example of an NP problem is the travelling salesman problem.
Since "P" is contained in "NP", a problem's membership in the class "NP" often reflects only our current understanding of how to solve it and is not conclusive. In general, there is no reason to believe that a P-solution cannot be found for a given NP problem. The question of the possible equivalence of the classes "P" and "NP" (that is, whether a P-solution can be found for every NP problem) is considered one of the central open questions of modern complexity theory; no answer has been found so far. Its very statement is possible thanks to the concept of NP-complete problems, to which any NP problem can be reduced: if a P-solution is found for even one NP-complete problem, then a P-solution exists for every NP problem. An example of an NP-complete problem is the satisfiability problem for formulas in conjunctive normal form (CNF-SAT).
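The "verifiable in polynomial time" part of this definition is easy to make concrete. Below is a sketch of a polynomial-time verifier for CNF satisfiability in Python; the encoding of a formula as lists of signed integers is a convention chosen for this example:

```python
# A CNF formula is a list of clauses; each clause is a list of literals,
# where literal k > 0 means variable k and k < 0 means its negation.
# The certificate is an assignment: variable -> bool.
def verify_cnf(formula, assignment):
    # One pass over all literals: time linear in the size of the formula.
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

# (x1 or not x2) and (x2 or x3)
formula = [[1, -2], [2, 3]]
print(verify_cnf(formula, {1: True, 2: False, 3: True}))    # True
print(verify_cnf(formula, {1: False, 2: False, 3: False}))  # False
```

Checking a proposed assignment takes one linear pass, while no polynomial-time method of finding such an assignment is known.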
Studies of the complexity of algorithms have made it possible to take a fresh look at many classical mathematical problems and to find, for some of them (multiplication of polynomials and matrices, solving systems of linear equations, and others), solutions that require fewer resources than the traditional ones.
Mathematical Applications of Algorithm Theory
Some applications of the theory of algorithms:
- study of mass problems;
- applications to the foundations of mathematics: constructive semantics;
- applications to mathematical logic: analysis of formalized languages of logic and arithmetic;
- computable analysis;
- numbered structures;
- applications to probability theory: defining the notion of a random sequence;
- applications to information theory: an algorithmic approach to the concept of the amount of information;
- estimating the difficulty of solving individual problems;
- the influence of the theory of algorithms on algorithmic practice [3].
Notes
- ↑ Semenov A. L., Uspensky V. A. Mathematical logic in the computational sciences and computational practice // Bulletin of the USSR Academy of Sciences, No. 7, pp. 93-103.
- ↑ Uspensky V. A., Semenov A. L. Solvable and unsolvable algorithmic problems // Kvant, 1985, No. 7, pp. 9-15.
- ↑ Uspensky V. A., Semenov A. L. Theory of Algorithms: Basic Discoveries and Applications. M.: Nauka, 1987, 288 pp.