The question of determining whether a given natural number is prime is known as the primality problem.
A primality test is an algorithm that, given a number as input, either fails to confirm that the number is composite or establishes its primality with certainty. In the second case it is called a true primality test. A probabilistic primality test therefore yields only a hypothesis: if the algorithm does not confirm that the number is composite, the number is prime with a certain probability. This gives less confidence that the result matches the true state of affairs than a true primality test, which produces a mathematically proven answer.
Introduction
Problems of discrete mathematics are among the most computationally difficult. One of them is the factorization problem: finding the decomposition of a number into prime factors. Solving it requires finding prime numbers, which leads to the primality problem. The primality-testing problem belongs to the complexity class P; that is, the running time of algorithms solving it depends polynomially on the size of the input, as was proved in 2002 (the AKS test). The analogous statement for the factorization problem remains unproven.
Some applications of mathematics that rely on factorization require a series of very large primes chosen at random. The algorithm for obtaining them is based on Bertrand's postulate (for any n > 1 there is a prime p with n < p < 2n):
Algorithm:
Input: a natural number n > 1. Output: a random prime p with n < p < 2n.
1. Pick a random integer m in the interval (n, 2n).
2. Test m for primality.
3. If m is prime, output it and stop; otherwise return to step 1.
The running time of this algorithm is not bounded in advance, but with high probability it is polynomial, provided that primes are sufficiently numerous and distributed more or less evenly over the interval. For random primes both conditions hold.
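A minimal Python sketch of this random-search procedure (the helper names are illustrative; naive trial division stands in for a real polynomial-time primality test):

```python
import random

def is_prime(n):
    """Naive trial division up to sqrt(n); a placeholder for a
    polynomial-time primality test on cryptographic sizes."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def random_prime(n):
    """Draw random candidates from (n, 2n) until one is prime.
    Bertrand's postulate guarantees such a prime exists."""
    while True:
        m = random.randrange(n + 1, 2 * n)
        if is_prime(m):
            return m
```

By the prime number theorem roughly one candidate in ln n is prime, so the expected number of iterations of the loop is modest.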
It is known (Euclid's theorem) that the set of primes is infinite. Dirichlet's theorem (1837) states that if gcd(a, m) = 1, then there are infinitely many primes congruent to a modulo m. In other words, the primes are distributed evenly among the φ(m) invertible residue classes modulo m, where φ is Euler's totient function [1], for any value of m. However, even if the primes are evenly distributed but very scarce, a random search may be impractical. This second concern is settled by the prime number theorem (1896), according to which the number π(x) of primes in the interval from 1 to x grows, as x increases, like x / ln x. This quantity tends to infinity rather quickly, so even for large x there is a fairly high probability (about 1 / ln x) of finding a prime at random. It follows that the algorithm described above can give an answer in polynomial time, provided there exists a polynomial-time algorithm for verifying the primality of an arbitrarily large number; this again leads to the primality problem.
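A quick numerical illustration of this estimate (the prime count 78498 below 10^6 is the known value of π(10^6)):

```python
import math

x = 10**6
pi_x = 78498                 # known number of primes below 10**6
estimate = x / math.log(x)   # prime number theorem: pi(x) ~ x / ln x
density = 1 / math.log(x)    # chance that a random number near x is prime
# estimate is about 72382, within roughly 8% of the true count,
# and density is about 0.072: roughly one draw in 14 is prime.
```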
Historical Information
The earliest known references to primes are due to Euclid (3rd century BC). The first algorithm for finding primes was devised by Eratosthenes (2nd century BC) and is now known as the sieve of Eratosthenes. Its essence is the successive removal ("sifting"), from the list of integers from 2 to n, of the multiples of 2, 3 and each further prime already found [2]. Much later the Arab mathematician Ibn al-Banna proposed running the enumeration not up to n but only up to √n, which reduces the number of operations. The drawback of this method is that, instead of checking a given number for primality, it sequentially enumerates [3] all the integers up to √n, and it is therefore inefficient and demands substantial computing power.
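The sieve described above fits in a few lines of Python; crossing out starts at p*p, so, per Ibn al-Banna's observation, the outer loop only needs to run while p*p <= n:

```python
def sieve(n):
    """Sieve of Eratosthenes: return the list of all primes up to n."""
    prime = [True] * (n + 1)
    prime[0] = prime[1] = False
    p = 2
    while p * p <= n:              # sifting primes are needed only up to sqrt(n)
        if prime[p]:
            for m in range(p * p, n + 1, p):
                prime[m] = False   # cross out the multiples of p
        p += 1
    return [i for i in range(n + 1) if prime[i]]
```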
At the beginning of the 13th century, Leonardo of Pisa, known as Fibonacci, introduced the sequence of numbers now named after him. One of its properties is that a Fibonacci number F_n can be prime only for prime n, with the single exception of F_4 = 3. This property can be used when checking Fibonacci numbers for primality. Independently of Ibn al-Banna, Fibonacci also proposed testing numbers for primality by trial division up to √n. That algorithm is true (deterministic), since its answer is always correct, but it is extremely inefficient.
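The Fibonacci property mentioned above is easy to verify empirically for small indices (a sketch; the helper names are illustrative):

```python
def is_prime(n):
    """Trial division up to sqrt(n): Fibonacci's deterministic method."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def fibonacci(count):
    """Return [(1, F_1), (2, F_2), ...] with F_1 = F_2 = 1."""
    pairs, a, b = [], 1, 1
    for n in range(1, count + 1):
        pairs.append((n, a))
        a, b = b, a + b
    return pairs

# Every prime F_n with n <= 25 has a prime index n, except F_4 = 3.
prime_indices = [n for n, f in fibonacci(25) if is_prime(f)]
```

Here prime_indices comes out as [3, 4, 5, 7, 11, 13, 17, 23]; every index except 4 is itself prime.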
The first to use relationships between numbers to determine primality was Pietro Antonio Cataldi, in his work on perfect numbers, that is, numbers equal to the sum of their proper divisors. The first seven perfect numbers are 6, 28, 496, 8128, 33550336, 8589869056 and 137438691328. Cataldi established that if the number 2^n - 1 is prime, then the number 2^(n-1) * (2^n - 1) is perfect.
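Cataldi's criterion can be checked directly for the small cases; a sketch (divisor_sum is an illustrative helper):

```python
def divisor_sum(n):
    """Sum of the proper divisors of n (divisors excluding n itself)."""
    return sum(d for d in range(1, n) if n % d == 0)

# If 2**p - 1 is prime, then 2**(p - 1) * (2**p - 1) is perfect:
# p = 2, 3, 5, 7 give the first four perfect numbers 6, 28, 496, 8128.
perfect = [2**(p - 1) * (2**p - 1) for p in (2, 3, 5, 7)]
```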
In the 17th century, the French mathematician Marin Mersenne studied numbers of the form 2^n - 1 [4], later named Mersenne numbers in his honor. Mersenne claimed that among the first 257 Mersenne numbers only 11 are prime (for n = 2, 3, 5, 7, 13, 17, 19, 31, 67, 127 and 257). In this he made several mistakes: 2^n - 1 is not prime for n = 67 or 257, and it is prime for n = 61, 89 and 107. Searching for primes among the Mersenne numbers is comparatively easy thanks to the Lucas-Lehmer test, which yields an answer relatively quickly; this is why the largest primes currently known are Mersenne numbers. Several further ideas concerning prime numbers appeared in the correspondence between Mersenne and Fermat [4].
Thus, Fermat discovered that if an integer a is not divisible by a prime p, then a^(p-1) - 1 is always divisible by p (Fermat's little theorem). The theorem was later generalized by Euler. Several primality tests are based on Fermat's little theorem. Fermat also conjectured that all numbers of the form 2^(2^k) + 1 are prime for every natural k. This is true for k ≤ 4; a counterexample was found by Euler: 2^(2^5) + 1 = 4294967297 = 641 * 6700417. For testing Fermat numbers for primality there is an efficient test due to Pepin. To date no new Fermat primes have been found.
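Fermat's little theorem gives the simplest probabilistic test: pick random bases a and check whether a^(n-1) ≡ 1 (mod n). A sketch (note that Carmichael numbers such as 561 can fool it for every base coprime to them):

```python
import random

def fermat_test(n, rounds=20):
    """Fermat probabilistic primality test. A single base a with
    pow(a, n - 1, n) != 1 proves n composite; if every tried base
    passes, n is only 'probably prime'."""
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False   # a is a Fermat witness: n is composite
    return True
```

For example, 341 = 11 * 31 satisfies pow(2, 340, 341) == 1, so it is a Fermat pseudoprime to base 2, although base 3 exposes it.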
Among other scientists, Euler, Legendre and Gauss also worked on the primality of numbers. Significant results on the primality problem were obtained by the French mathematician Edouard Lucas in his works on the Fermat and Mersenne numbers. It is his criterion for the primality of Mersenne numbers that is now known as the Lucas-Lehmer test.
True Primality Tests
Existing algorithms for checking numbers for primality fall into two categories: true primality tests and probabilistic primality tests. A true test always establishes, as the result of its computation, whether the number is prime or composite; a probabilistic test reports either that the number is composite or that it is probably not composite, with a certain probability [2] [4]. Put simply, a probabilistic algorithm says that the number is most likely not composite, yet the number may in the end turn out to be either prime or composite. Composite numbers that pass a probabilistic primality test are called pseudoprimes [1]. One example is the Carmichael numbers [3]; others are the Euler-Jacobi pseudoprimes for the Solovay-Strassen test and the Lucas pseudoprimes.
One example of a true primality test is the Lucas-Lehmer test for Mersenne numbers. Its obvious drawback is that it applies only to numbers of a special form. Other examples include tests based on Fermat's little theorem:
- Pepin's test for Fermat numbers
- Proth's theorem for Proth numbers
- Agrawal-Kayal-Saxena (AKS) test, the first polynomial-time primality test
- Lucas-Lehmer-Riesel test
And also:
- trial-division method
- Wilson's theorem
- Pocklington criterion
- Miller test
- Adleman-Pomerance-Rumely test, improved by Cohen and Lenstra
- primality test using elliptic curves (ECPP).
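As an illustration, the Lucas-Lehmer test mentioned above fits in a few lines; for an odd prime p it decides the primality of M_p = 2^p - 1 deterministically:

```python
def lucas_lehmer(p):
    """True (deterministic) primality test for M_p = 2**p - 1, with p an
    odd prime: M_p is prime iff s_(p-2) == 0, where s_0 = 4 and
    s_(k+1) = s_k**2 - 2 (mod M_p)."""
    m = 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0
```

This confirms Mersenne's errors: lucas_lehmer(61) returns True while lucas_lehmer(67) returns False.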
Probabilistic primality tests
This category includes:
- Fermat test
- Miller-Rabin test
- Solovay-Strassen test
- Baillie-Pomerance-Selfridge-Wagstaff (BPSW) test
- quadratic Frobenius test
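A sketch of the Miller-Rabin test from the list above; unlike the Fermat test it is not fooled by Carmichael numbers, and a composite number survives each round with probability at most 1/4:

```python
import random

def miller_rabin(n, rounds=40):
    """Miller-Rabin probabilistic primality test. The error probability
    for a composite n is at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False   # a is a witness: n is certainly composite
    return True
```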
Primality Tests in Cryptography
Primes are now widely used in information security, chiefly because of the invention of public-key encryption, which underlies both message encryption and digital-signature algorithms. Under current standards, the primes used in forming a digital signature with elliptic curves are at least 254 bits long, in accordance with GOST R 34.10-2012. For numbers of this size the question of primality is extremely hard: simple methods such as trial division are unsuitable, since they require an extremely large amount of computational resources and time [6].
Determining the primality of a number is also needed when breaking information encrypted or signed with the RSA algorithm. To open such a message, one must be able to decompose a number into two prime factors, which is a nontrivial task for large numbers.
On the other hand, generating keys for public-key cryptosystems, digital-signature schemes and so on requires large pseudorandom primes. For example, the Diffie-Hellman protocol requires a prime that defines the underlying finite field. An efficient primality test therefore improves the reliability of the algorithms that generate such keys.
See also
- Cryptography
- Sieve of Eratosthenes
- Sieve of Sundaram
Notes
- ↑ 1 2 Cormen T., Leiserson C. Algorithms: Construction and Analysis. - M.: MCCME, 2002. - pp. 765-772.
- ↑ 1 2 Vasilenko O. N. Number-Theoretic Algorithms in Cryptography. - M.: MCCME, 2003. - 328 p.
- ↑ 1 2 Crandall R., Pomerance C. Prime Numbers: A Computational Perspective. - Springer, 2005.
- ↑ 1 2 3 Donald Knuth. The Art of Computer Programming, Vol. 2: Seminumerical Algorithms. - M.: Williams, 2007.
- ↑ Nesterenko Yu. V. Introduction to Cryptography. - Peter, 2001. - 288 p.
- ↑ Schneier B. Applied Cryptography. - pp. 296-300.
Literature
- Vasilenko O. N. Chapter 1. Testing numbers for primality and constructing large primes // Number-Theoretic Algorithms in Cryptography. - M.: MCCME, 2003. - pp. 12-56. - 328 p. - ISBN 5-94057-103-4.
- Nesterenko Yu. V. Chapter 4.6. How to check a large number for primality // Introduction to Cryptography / Ed. V. V. Yashchenko. - Peter, 2001. - 288 p. - ISBN 5-318-00443-1.
- Schneier B. Part 3. Cryptographic Algorithms. Chapter 11. Mathematical Foundations. 11.5. Prime Number Generation // Applied Cryptography: Protocols, Algorithms and Source Code in C. - M.: Triumph, 2002. - pp. 296-300. - 816 p. - 3000 copies. - ISBN 5-89392-055-4.
- Cormen T., Leiserson C. Chapter 33.8. Checking numbers for primality // Algorithms: Construction and Analysis. - M.: MCCME, 2002. - pp. 765-772. - ISBN 5-900916-37-5.
- Crandall R., Pomerance C. Chapter 3. Recognizing Primes and Composites. Chapter 4. Primality Proving // Prime Numbers: A Computational Perspective. - Springer, 2005. - pp. 117-224. - ISBN 0-387-25282-7.
- Donald Knuth. Chapter 4.5.4. Factoring into Primes // The Art of Computer Programming, Vol. 2: Seminumerical Algorithms. - 3rd ed. - M.: Williams, 2007. - p. 832. - ISBN 0-201-89684-2.