Why is factorization hard?
Until the advent of subexponential methods, the best algorithms known for factoring all had running times of the form L_n[1, c] for some constant c. In other words, they took time exponential in the number of digits needed to write n, though even here lowering c can represent important progress. The quadratic sieve then achieved L_n[1/2, c], which can already be seen as closing half the gap between exponential and polynomial time, and the number field sieve achieved L_n[1/3, c].
Since then, the only progress has been in lowering c and making the techniques more practical. But why should everything stop here?
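For reference, the L-notation used here is the standard one from the factoring literature:

```latex
L_n[\alpha, c] = \exp\!\left( (c + o(1)) \, (\ln n)^{\alpha} \, (\ln \ln n)^{1 - \alpha} \right)
```

With α = 1 this is n^(c+o(1)), exponential in the digit length of n; with α = 0 it is polynomial in the digit length; values between 0 and 1 interpolate, which is why lowering α from 1 to 1/2 to 1/3 can be read as closing half, then two thirds, of the gap.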
We're already two thirds of the way to a polynomial-time factoring algorithm, and I'd guess that we can go almost all of the way. Of course, I have no real evidence for my views; if I had any good ideas for factoring, I'd be working on them instead of writing this.
How long factoring takes in practice depends very much on your computing power, and also on the size of the prime factors: it is easy to factor a big number if its factors are small. Below you can see different numbers which are products of two large primes.
Each number is about ten times bigger than the previous one. Either use known numbers such as the RSA challenge numbers or randomly generate a large number n. Then calculate the modular exponentiation a^(n−1) mod n for several values of a; small values such as a = 2, 3, 5 are a good choice to start with. If the answer is not congruent to 1 for some value of a, then we know n is composite. If the answer is congruent to 1 for all the values of a tried, then n is likely a prime number.
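The test described above is the Fermat primality test. A minimal sketch (the function name and trial count are my own choices):

```python
import random

def fermat_is_probable_prime(n, trials=20):
    """Fermat test: if a^(n-1) mod n != 1 for some a, n is certainly composite."""
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:  # three-argument pow does modular exponentiation
            return False  # definitely composite
    return True  # probably prime
```

Note the asymmetry: a single failing base proves compositeness, while passing every trial only makes primality likely (Carmichael numbers can fool this test for every base coprime to n).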
Each RSA number is a product of two large prime numbers. The best heuristic explanation I know for the difficulty of factoring is that primes are randomly distributed. One of the easier-to-understand results along these lines is Dirichlet's theorem.
This theorem says that every arithmetic progression a, a + d, a + 2d, … with gcd(a, d) = 1 contains infinitely many primes. In other words, you can think of primes as being dense with respect to such progressions, meaning you can't avoid running into them.
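As a small illustration (helper names my own), the progression 3, 13, 23, … with a = 3, d = 10 and gcd(3, 10) = 1 keeps producing primes:

```python
def is_prime(n):
    """Naive primality check by trial division."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Primes among the first 30 terms of the progression 3 + 10k
progression_primes = [3 + 10 * k for k in range(30) if is_prime(3 + 10 * k)]
print(progression_primes)  # → [3, 13, 23, 43, 53, ...]
```

Dirichlet's theorem guarantees this list never dries up, no matter how far you extend the range.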
This is the simplest of a rather large collection of such results; in all of them, primes appear in ways very much analogous to random numbers. The difficulty of factoring is thus analogous to the impossibility of reversing a one-time pad. In a one-time pad, there's a bit we don't know XORed with another bit we don't know.
We get zero information about an individual bit from knowing the result of the XOR. Replace "bit" with "prime" and XOR with multiplication, and you have the factoring problem.
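The zero-information claim can be checked exhaustively for a single bit: for either ciphertext value, both plaintext values remain possible under some key, so observing the ciphertext narrows nothing down.

```python
# One-time pad on one bit: ciphertext c = plaintext p XOR key k.
# For each possible c, find every p that some key k could explain.
for c in (0, 1):
    possible_plaintexts = {p for p in (0, 1) for k in (0, 1) if p ^ k == c}
    print(c, possible_plaintexts)  # both plaintext bits remain possible either way
```

Multiplication of primes is not quite this perfect, which is why the next paragraph says "very little information" rather than none.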
It's as if you've multiplied two random numbers together, and you get very little information from the product instead of zero information. Even ignoring that each divisibility test takes longer for bigger numbers, this approach (presumably trial division up to √n) takes almost twice as long if you just add a single binary digit to n.
Actually, it takes twice as long if you add two digits, since √n doubles when n quadruples. I think the usual definition of exponential runtime is: make n one bit longer, and the algorithm takes twice as long. But note that this observation applies only to the algorithm proposed here. It is still unknown whether integer factorization can be done in polynomial time. Cryptographers certainly hope that it cannot, but there are also alternative cryptosystems that do not depend on factoring being hard, such as elliptic curve cryptography, just in case.
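The doubling behavior described matches trial division up to √n; a minimal sketch of that approach (function name my own):

```python
def trial_division(n):
    """Factor n by testing divisors up to sqrt(n).

    The loop runs up to sqrt(n) times, so the work roughly doubles
    every time n gains two bits.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:           # whatever remains is a prime factor
        factors.append(n)
    return factors

print(trial_division(91))  # → [7, 13]
```

This is fine for small inputs, but for a number with two large prime factors (the RSA setting) the √n bound is exactly what makes it hopeless.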