This is a classic question where the conventionally applied method, the Maximum Likelihood (ML) principle, breaks down. There's a good explanation at:
http://en.wikipedia.org/wiki/German_tank_problem#Example
In a nutshell, the ML estimator of N, the number of tanks produced, is the maximum of the observed serial numbers. But this estimator is biased: it systematically underestimates N, because you're unlikely to actually observe the top serial number in your random sample.
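Here's a minimal simulation sketch of that bias (the values of N, k, and the trial count are arbitrary choices for the demo, not from the question):

    import random

    N = 1000          # true number of tanks (hypothetical value for the demo)
    k = 5             # sample size
    trials = 100_000

    # Draw k serial numbers without replacement from 1..N and average the sample maximum.
    avg_max = sum(max(random.sample(range(1, N + 1), k)) for _ in range(trials)) / trials
    print(f"true N = {N}, mean of sample max = {avg_max:.1f}")
    # The mean lands near k*(N+1)/(k+1) ~= 834, well below the true N = 1000.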
So you add a correction term which is, intuitively, the average gap between the serial numbers in the sample. The correction makes up for the fencepost effect, and it goes to zero as the sample size grows.
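A sketch of the corrected estimator, N_hat = m + m/k - 1 (the sample maximum m plus the average gap), with the same assumed parameters as above; the simulation just checks that its mean sits near the true N:

    import random

    def tank_estimate(serials):
        """Bias-corrected estimate of N from a sample of observed serial numbers."""
        m, k = max(serials), len(serials)
        return m + m / k - 1  # equivalently m*(1 + 1/k) - 1

    N, k, trials = 1000, 5, 100_000
    avg_est = sum(
        tank_estimate(random.sample(range(1, N + 1), k)) for _ in range(trials)
    ) / trials
    print(f"true N = {N}, mean corrected estimate = {avg_est:.1f}")  # close to 1000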