Modern factoring techniques, broadly speaking, involve finding lots of relationships between appropriate numbers, and then finding a combination of these relationships that produce the desired result. There are heuristics to suggest how many relationships will be needed to get an appropriate combination, and progress can be "measured" by seeing how close you are to that number.
More specifically ...
Most factoring algorithms are a variant of:
+ Find lots of numbers that are squares modulo N (these are called quadratic residues)
+ Factor those numbers completely
+ Find a subset of those factored numbers such that ...
+ taken together, all the powers of primes are even.
The result is then two squares that are congruent modulo N, and by the difference of two squares formula you have a non-trivial chance of a factorisation: if x^2 = y^2 (mod N) with x not congruent to plus or minus y, then gcd(x-y, N) is a proper factor of N. The relationships I first mentioned are the quadratic residues; the combination is the subset.
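To make the outline concrete, here's a toy sketch in the spirit of Dixon's method (my own illustration, not production code -- the subset search is brute force where real implementations use linear algebra mod 2, and the fixed factor base is an assumption for the example):

```python
from math import gcd, isqrt
from itertools import combinations

def smooth_exponents(n, primes):
    """Exponent vector of n over `primes`, or None if n is not smooth."""
    if n <= 0:
        return None
    exps = [0] * len(primes)
    for i, p in enumerate(primes):
        while n % p == 0:
            n //= p
            exps[i] += 1
    return exps if n == 1 else None

def dixon(N, primes=(2, 3, 5, 7, 11, 13)):
    # Steps 1+2: collect x whose square mod N factors completely over `primes`
    relations = []
    for x in range(isqrt(N) + 1, N):
        exps = smooth_exponents(x * x % N, primes)
        if exps is not None:
            relations.append((x, exps))
        if len(relations) > len(primes) + 1:
            break
    # Step 3: find a subset whose exponent vectors sum to all-even entries
    # (brute force here purely for illustration)
    for r in range(1, len(relations) + 1):
        for subset in combinations(relations, r):
            total = [sum(col) for col in zip(*(e for _, e in subset))]
            if any(t % 2 for t in total):
                continue
            x = 1
            for xi, _ in subset:
                x = x * xi % N
            y = 1
            for t, p in zip(total, primes):
                y = y * pow(p, t // 2, N) % N
            g = gcd(x - y, N)  # difference of squares: (x-y)(x+y) = 0 mod N
            if 1 < g < N:
                return g
    return None
```

For example, with N = 1649 the relation 57^2 = 1600 = 2^6 * 5^2 (mod N) already has all-even exponents, giving gcd(57 - 40, 1649) = 17.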
There are various techniques for finding small quadratic residues that are (relatively speaking) easy to factor. These include the quadratic sieve, the multiple polynomial quadratic sieve, the continued fraction method, and the number field sieve (special and general).
The combinations are found by finding linearly dependent rows of a sodding enormous matrix modulo 2. This can be done using elementary linear algebra, but you need humungous amounts of memory.
Err... no, not really. In addition to taking O(n^2) storage, a dense solve will take O(n^3) time, which is quite infeasible for typical matrix sizes.
Instead, we use a sparse solve, such as the block Lanczos algorithm, which takes O(nw) storage and O(n^2w) time, where w is the average row weight.
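To show what the matrix step is actually computing, here's a toy version of the naive "elementary linear algebra" approach: each relation becomes its exponent vector mod 2, packed into a Python int, and Gaussian elimination over GF(2) finds a subset of rows that XORs to zero. This is a minimal sketch for small inputs, nothing like a real block Lanczos implementation:

```python
def find_dependency(rows):
    """Given GF(2) row vectors packed into Python ints, return a bitmask
    over the input rows naming a subset that XORs to zero, or None if
    the rows are linearly independent.  Naive Gaussian elimination:
    fine at toy sizes, hopeless at real factoring scales."""
    pivots = {}  # pivot column -> (reduced row, combination of input rows)
    for i, row in enumerate(rows):
        combo = 1 << i  # tracks which original rows are XORed into `row`
        while row:
            col = (row & -row).bit_length() - 1  # lowest set bit
            if col not in pivots:
                pivots[col] = (row, combo)
                break
            prow, pcombo = pivots[col]
            row ^= prow
            combo ^= pcombo
        else:
            return combo  # row reduced to zero: dependent subset found
    return None
```

For instance, exponent vectors (5,0,0) and (3,0,2) both reduce to (1,0,0) mod 2, so the two rows are equal and `find_dependency([0b001, 0b001])` returns the mask 0b11: multiply those two relations together.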
I thought the comment was already too long, and was trying to give an outline and not dive into too many details. I've already omitted some of the methods to assist with the factoring of small quadratic residues, for example, and this was another part where I thought I'd give the naive idea rather than too many explicit details.
But you're absolutely right, and thank you for providing some of said details. It's nice to know that things I write are going to be checked by people who actually know more than I do.
If you had just stopped at "finding linearly dependent rows of a sodding enormous matrix modulo 2" I wouldn't have said anything -- but mentioning elementary linear algebra was potentially misleading, so I figured it was worth adding the details there. :-)
For what it's worth, there is actually some elementary linear algebra done before the heavy guns are brought out: If a prime only occurs in two or three relations, adding one of those relations to the other(s) will decrease n while increasing w by a small enough factor as to make the final Lanczos solve faster. (In rare cases it can be worth eliminating primes which occur in 4 relations or more, but usually not.)
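A toy version of that pre-filtering pass might look like the following (my own sketch, handling only the two-relation case described above): whenever a prime's column has exactly two nonzero entries, XOR one row into the other so the column cancels, and drop the first row.

```python
from collections import defaultdict

def merge_pairs(rows, ncols):
    """Toy filtering pass over GF(2) rows packed into ints: whenever a
    column (prime) appears in exactly two rows, XOR one row into the
    other and delete it.  The column cancels, n shrinks, and the later
    sparse solve gets cheaper."""
    rows = list(rows)
    changed = True
    while changed:
        changed = False
        occurs = defaultdict(list)  # column -> indices of rows using it
        for i, r in enumerate(rows):
            for col in range(ncols):
                if (r >> col) & 1:
                    occurs[col].append(i)
        for col, idxs in occurs.items():
            if len(idxs) == 2:
                i, j = idxs
                rows[j] ^= rows[i]  # column `col` now cancels out
                del rows[i]
                changed = True
                break  # row indices are stale after the delete; rescan
    return rows
```

A real filter would also discard singletons (a prime appearing in only one relation can never be part of an all-even subset) and weigh the row-weight increase against the row-count decrease, as the comment above describes.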