Clever Geek Handbook

Primality test

The question of determining whether a natural number N is prime is known as the primality problem.

A primality test (or primality check) is an algorithm that, taking a number N as input, either fails to confirm the assumption that the number is composite, or establishes its primality exactly. In the second case it is called a true primality test. A probabilistic primality test thus offers only a hypothesis: if the algorithm does not confirm that N is composite, then N is prime with a certain probability. This gives less confidence that the test result matches the actual state of affairs than a true primality test, which produces a mathematically proven result.

Introduction

Problems of discrete mathematics are among the hardest in mathematics. One of them is the factorization problem, which consists in finding the decomposition of a number into prime factors. Solving it requires finding prime numbers, which leads to the primality problem. The primality-testing problem belongs to the complexity class P, that is, the running time of the algorithms solving it depends polynomially on the size of the input; this was proved in 2002. A similar statement for the factorization problem is conjectured but unproven.

Some applications of mathematics that rely on factorization require a series of very large primes chosen at random. An algorithm for obtaining them, based on Bertrand's postulate, is as follows:

Algorithm:

  1. Input: a natural number N
  2. Solution (search for a random prime P):
    1. P ← a random natural number in the segment [N, 2N]
    2. While P is composite:
      1. P ← P + 1
      2. If P = 2N, then
        1. P ← N
    3. Return "P is a random prime"

The running time of this algorithm is not fixed in advance, but with high probability it is polynomial, as long as there are enough primes in the segment and they are distributed more or less evenly. For random primes these conditions are satisfied.
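The search procedure above can be sketched in Python. This is a minimal illustration: `is_prime` here is naive trial division, far too slow for cryptographic sizes, and the wrap-around condition is written as `p > 2 * n` so that a starting value of exactly 2N is also handled.

```python
import random

def is_prime(p):
    """Naive trial division up to sqrt(p); fine for a sketch only."""
    if p < 2:
        return False
    d = 2
    while d * d <= p:
        if p % d == 0:
            return False
        d += 1
    return True

def random_prime(n):
    """Pick a random starting point in [n, 2n] and walk upward until a
    prime is found, wrapping back to n past 2n. Bertrand's postulate
    guarantees a prime in the segment, so the walk terminates."""
    p = random.randint(n, 2 * n)
    while not is_prime(p):
        p += 1
        if p > 2 * n:
            p = n
    return p

print(random_prime(1000))  # e.g. 1447: some prime between 1000 and 2000
```

Each call returns a prime in [N, 2N]; the expected number of trial steps is about ln(N), by the density argument discussed below.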

It is known (Euclid's theorem) that the set of primes is infinite. Dirichlet's theorem (1837) states that if GCD(a, n) = 1, then there are infinitely many primes congruent to a modulo n. In other words, the primes are distributed evenly among the residue classes mod n, in accordance with the Euler function φ(n) [1], for any value of n. However, even if the primes are evenly distributed, there might be too few of them, and the search could be infeasible in practice. This second concern is resolved by the prime number theorem (1896), according to which the number of primes in the interval [2, n] grows with increasing n as n/ln(n). This quantity tends to infinity rather quickly, so even for large n there is a fairly high probability (about 1/ln(n)) of hitting a prime at random. It follows that the algorithm described above can give an answer in polynomial time, provided there exists a polynomial algorithm for checking the primality of an arbitrarily large number n, which leads to the primality problem.
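The prime number theorem estimate can be checked numerically. A short sketch, counting by naive trial division (adequate at this scale), compares the exact prime count π(n) with n/ln(n):

```python
import math

def is_prime(n):
    """Trial division up to sqrt(n)."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

n = 10_000
pi_n = sum(1 for k in range(2, n + 1) if is_prime(k))
estimate = n / math.log(n)
print(pi_n, round(estimate))  # 1229 1086
```

The estimate undercounts (the error term shrinks only slowly), but it grows at the right rate, which is all the random-search argument needs.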

Historical Information

The earliest mentions of prime numbers come from Euclid (3rd century BC). The first algorithm for finding primes was invented by Eratosthenes (2nd century BC) and is now known as the sieve of Eratosthenes. Its essence is the sequential removal, from the list of integers from 1 to n, of the multiples of 2, 3, 5 and the other primes already found by the "sieve" [2]. Much later, the Arab mathematician Ibn al-Banna proposed enumerating divisors not up to n but only up to √n, which reduces the number of operations. The disadvantage of this method is that instead of checking a given number n for primality, it performs a sequential enumeration [3] of all integers up to √n, and is therefore inefficient and requires significant computing power.
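A minimal Python sketch of the sieve. Starting the crossing-out at p·p means no prime above √n ever has its multiples removed, which is the same √n bound as in Ibn al-Banna's refinement:

```python
import math

def eratosthenes(n):
    """Sieve of Eratosthenes: return all primes up to n."""
    flags = [True] * (n + 1)
    flags[0] = flags[1] = False
    for p in range(2, math.isqrt(n) + 1):
        if flags[p]:
            # Multiples below p*p were already crossed out by smaller primes.
            for multiple in range(p * p, n + 1, p):
                flags[multiple] = False
    return [i for i, is_p in enumerate(flags) if is_p]

print(eratosthenes(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

The sieve finds all primes up to n at once, which is exactly why it is ill-suited to testing one given large number: the work scales with n, not with the size (number of digits) of n.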

At the beginning of the 13th century, Leonardo of Pisa, known as Fibonacci, proposed the sequence of numbers named after him, one property of which is that the n-th Fibonacci number F_n can be prime only for prime n, with the exception of n = 4. This property can be used when checking Fibonacci numbers for primality. He also, independently of Ibn al-Banna, proposed a method of checking numbers for primality by trial division. That algorithm is a true (deterministic) test, since it always gives an answer, but it is extremely inefficient.
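The stated property of Fibonacci numbers is easy to verify for small indices. The helpers below are straightforward implementations written for this illustration:

```python
def is_prime(n):
    """Trial division up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def fib(n):
    """n-th Fibonacci number, with F_1 = F_2 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Indices n <= 50 for which F_n is prime: every one is prime except n = 4.
prime_fib_indices = [n for n in range(1, 51) if is_prime(fib(n))]
print(prime_fib_indices)  # [3, 4, 5, 7, 11, 13, 17, 23, 29, 43, 47]
assert all(is_prime(n) or n == 4 for n in prime_fib_indices)
```

Note the converse fails: n = 19 is prime, yet F_19 = 4181 = 37 · 113 is composite, so the property only filters candidates rather than proving primality.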

The first to use relationships between numbers to determine primality was Pietro Antonio Cataldi in his work on perfect numbers. Perfect numbers are those equal to the sum of their proper divisors. The first seven perfect numbers are 6, 28, 496, 8128, 33550336, 8589869056 and 137438691328. Cataldi established that if a number n is prime and the number 2^n − 1 is also prime, then the number 2^(n−1)(2^n − 1) is perfect.
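Cataldi's rule can be checked directly for the first few exponents; the sketch stops at n = 7 to keep the brute-force divisor sums cheap:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def sum_proper_divisors(n):
    """Sum of the divisors of n strictly less than n."""
    return sum(d for d in range(1, n // 2 + 1) if n % d == 0)

for n in (2, 3, 5, 7):
    mersenne = 2 ** n - 1
    if is_prime(n) and is_prime(mersenne):
        perfect = 2 ** (n - 1) * mersenne
        print(perfect, sum_proper_divisors(perfect) == perfect)
# 6 True / 28 True / 496 True / 8128 True
```

Euler later proved the converse for even numbers: every even perfect number has exactly this form.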

In the 17th century, the French mathematician Marin Mersenne studied numbers of the form 2^n − 1 [4], later named Mersenne numbers in his honor. Mersenne claimed that of the first 257 Mersenne numbers, only 11 are prime (for n = 2, 3, 5, 7, 13, 17, 19, 31, 67, 127 and 257). In doing so he made several mistakes: 2^n − 1 is not prime for n = 67 or 257, and is prime for n = 61, 89 and 107. Searching for primes among Mersenne numbers is comparatively easy thanks to the Lucas-Lehmer test, which finds the answer relatively quickly. That is why the largest currently known primes are Mersenne numbers. The correspondence between Mersenne and Fermat contained several more ideas concerning prime numbers [4].
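The Lucas-Lehmer test mentioned above is only a few lines. The sketch below runs it over Mersenne's original list of exponents and reproduces his errors:

```python
def lucas_lehmer(p):
    """For an odd prime p, M_p = 2^p - 1 is prime iff the sequence
    s_0 = 4, s_k = s_{k-1}^2 - 2 (mod M_p) reaches 0 at step p - 2."""
    m = 2 ** p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Mersenne's claimed exponents plus the ones he missed:
exponents = [3, 5, 7, 13, 17, 19, 31, 61, 67, 89, 107, 127, 257]
print([p for p in exponents if lucas_lehmer(p)])
# [3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127] -- 67 and 257 fail
```

Even M_257, a 78-digit number, is settled in 255 modular squarings, which is why record primes are hunted among Mersenne numbers.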

Thus, Fermat discovered that if an integer a is not divisible by a prime p, then the number a^(p−1) − 1 is always divisible by p (Fermat's little theorem). The theorem was later generalized by Euler. Several primality tests are based on Fermat's little theorem. Fermat also conjectured that the numbers of the form 2^(2^k) + 1 are prime for all natural k. This is true for k = 0, 1, 2, 3, 4. A counterexample to this statement was found by Euler: 2^(2^5) + 1 = 4294967297 = 641 · 6700417. There is an efficient Pepin test for checking Fermat numbers for primality. To date, no new Fermat primes have been found.
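Both facts are easy to confirm with modular exponentiation. The sketch checks Fermat's little theorem for a few small primes and bases, then reproduces Euler's factorization of the fifth Fermat number:

```python
# Fermat's little theorem: if p is prime and a is not divisible by p,
# then a^(p-1) is congruent to 1 modulo p.
for p in (3, 5, 7, 11, 13, 101):
    for a in (2, 3, 10):
        if a % p != 0:
            assert pow(a, p - 1, p) == 1

# Euler's counterexample to Fermat's conjecture: F_5 is composite.
f5 = 2 ** (2 ** 5) + 1
print(f5)                   # 4294967297
print(f5 == 641 * 6700417)  # True
```

The converse of the little theorem is false, which is precisely what makes the Fermat test probabilistic rather than true, as discussed below.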

Among other scientists, Euler, Legendre and Gauss also worked on the primality of numbers. Significant results on the primality problem were obtained by the French mathematician Edouard Lucas in his work on Fermat and Mersenne numbers. It is his criterion for the primality of Mersenne numbers that is now known as the Lucas-Lehmer test.

True Primality Tests

Existing algorithms for checking numbers for primality fall into two categories: true primality tests and probabilistic primality tests. True tests always determine, as a result of the computation, whether a number is prime or composite; a probabilistic test reports that a number is composite, or that it is not composite, only with a certain probability ε [2][4]. Put simply, a probabilistic algorithm says that a number is most likely not composite, but the number may in the end turn out to be either prime or composite. Composite numbers that satisfy a probabilistic primality test are called pseudoprimes [1]. One example of such numbers are the Carmichael numbers [3]. Others are the Euler-Jacobi pseudoprimes for the Solovay-Strassen test and the Lucas pseudoprimes.
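The Carmichael numbers show why the Fermat test alone cannot be a true test. The sketch below verifies that 561, though composite, passes the Fermat condition for every base coprime to it:

```python
from math import gcd

n = 561  # 3 * 11 * 17, the smallest Carmichael number

# Bases coprime to n for which the Fermat condition a^(n-1) = 1 (mod n) holds:
liars = [a for a in range(2, n) if gcd(a, n) == 1 and pow(a, n - 1, n) == 1]
coprime = [a for a in range(2, n) if gcd(a, n) == 1]

print(n == 3 * 11 * 17)        # True: composite
print(liars == coprime)        # True: every coprime base is fooled
print(len(liars))              # 319
```

Only bases sharing a factor with 561 expose it, so a Fermat test that happens to pick coprime bases declares this composite number "probably prime" every time.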

One example of a true primality test is the Lucas-Lehmer test for Mersenne numbers. Its obvious drawback is that it applies only to numbers of a special form. Other examples include tests based on Fermat's little theorem, as well as:

  • Pepin's test for Fermat numbers
  • Proth's theorem for Proth numbers
  • the Agrawal-Kayal-Saxena (AKS) test, the first polynomial-time true primality test
  • the Lucas-Lehmer-Riesel test

And:

  • trial division (searching for a divisor)
  • Wilson's theorem
  • the Pocklington criterion
  • the Miller test
  • the Adleman-Pomerance-Rumely test, improved by Cohen and Lenstra [5]
  • primality proving with elliptic curves (ECPP)

Probabilistic primality tests

This category includes:

  • the Fermat test
  • the Miller-Rabin test
  • the Solovay-Strassen test
  • the Baillie-Pomerance-Selfridge-Wagstaff (BPSW) test
  • the quadratic Frobenius test
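As an illustration, here is a common textbook formulation of the Miller-Rabin test. This is a sketch, not a production implementation; the round count and random base selection are illustrative choices.

```python
import random

def miller_rabin(n, rounds=20):
    """Probabilistic primality test. Returns False if n is certainly
    composite; True if n is probably prime (error <= 4**-rounds)."""
    if n < 2:
        return False
    for small in (2, 3):
        if n == small:
            return True
        if n % small == 0:
            return False
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

print(miller_rabin(2 ** 61 - 1))  # True: a Mersenne prime
print(miller_rabin(561))          # the Carmichael number is caught
```

Unlike the plain Fermat test, at most a quarter of the bases can "lie" for any composite n, including Carmichael numbers, so repeated rounds drive the error probability down exponentially. A prime input is never rejected.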

Primality Tests in Cryptography

Currently, prime numbers are widely used in the field of information security. This is primarily due to the invention of public-key encryption, which is used both for encrypting information and in digital signature algorithms. Under current standards, the size of the primes used in forming a digital signature on elliptic curves is at least 254 bits, in accordance with GOST R 34.10-2012. For such large numbers the question of determining primality is extremely difficult. Simple methods, such as trial division, are unsuitable because they require an extremely large amount of computational resources and time [6].

Determining the primality of a number is also needed when attacking information encrypted or signed with the RSA algorithm. Breaking such a message requires the ability to decompose a number into two prime factors, which is a nontrivial task for large numbers.

On the other hand, when generating keys for public-key cryptosystems, digital signature schemes, and so on, large pseudorandom primes are used. For example, the Diffie-Hellman protocol requires a prime that defines the finite field. Therefore, an efficient primality test improves the reliability of the algorithms that generate such keys.

See also

  • Cryptography
  • Sieve of Eratosthenes
  • Sieve of Sundaram

Notes

  1. Cormen T., Leiserson C. Algorithms: Construction and Analysis. M.: MCCME, 2002. Pp. 765-772.
  2. Vasilenko O. N. Number-Theoretic Algorithms in Cryptography. M.: MCCME, 2003. 328 pp.
  3. Crandall R., Pomerance C. Prime Numbers: A Computational Perspective. Springer, 2005.
  4. Knuth D. The Art of Computer Programming, vol. 2: Seminumerical Algorithms. M.: Williams, 2007.
  5. Nesterenko Yu. V. Introduction to Cryptography. Piter, 2001. 288 pp.
  6. Schneier B. Applied Cryptography. Pp. 296-300.

Literature

  • Vasilenko O. N. Chapter 1. Primality testing and construction of large primes // Number-Theoretic Algorithms in Cryptography. M.: MCCME, 2003. Pp. 12-56. 328 pp. ISBN 5-94057-103-4.
  • Nesterenko Yu. V. Chapter 4.6. How to test a large number for primality // Introduction to Cryptography / ed. V. V. Yashchenko. Piter, 2001. 288 pp. ISBN 5-318-00443-1.
  • Schneier B. Part 3. Cryptographic Algorithms. Chapter 11. Mathematical Background. 11.5. Prime Number Generation // Applied Cryptography: Protocols, Algorithms and Source Code in C. M.: Triumf, 2002. Pp. 296-300. 816 pp. ISBN 5-89392-055-4.
  • Cormen T., Leiserson C. Chapter 33.8. Primality testing // Algorithms: Construction and Analysis. M.: MCCME, 2002. Pp. 765-772. ISBN 5-900916-37-5.
  • Crandall R., Pomerance C. Chapter 3. Recognizing Primes and Composites. Chapter 4. Primality Proving // Prime Numbers: A Computational Perspective. Springer, 2005. Pp. 117-224. ISBN 0-387-25282-7.
  • Knuth D. Chapter 4.5.4. Factoring into Primes // The Art of Computer Programming, vol. 2: Seminumerical Algorithms. 3rd ed. M.: Williams, 2007. 832 pp. ISBN 0-201-89684-2.
Source - https://ru.wikipedia.org/w/index.php?title= Simplicity Test&oldid = 94302134

