Suppose you want to find a root of f(x) = x^3 - a mod N. If there is a root smaller than N^(1/3), this reduces to simply computing the integer cube root of a: x^3 stays below N, so there is no wrap-around and the equation already holds over the integers.
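For concreteness, here's a minimal Python sketch of that easy case. The modulus and root below are made-up toy values, not from any real system:

```python
# Easy case: if the root x0 is below N^(1/3), then x0**3 < N, nothing wraps,
# and the plain integer cube root of a recovers x0 directly.

def icbrt(n):
    """Largest integer r with r**3 <= n, by binary search."""
    lo, hi = 0, 1 << (n.bit_length() // 3 + 2)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid**3 <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

N = (2**61 - 1) * (2**31 - 1)   # toy modulus, standing in for an RSA N
x0 = 123456789                  # a root comfortably below N**(1/3)
a = pow(x0, 3, N)               # x0**3 < N, so no reduction actually happens

assert icbrt(a) == x0           # the cube root recovers the root directly
```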
But imagine the problem is slightly changed to f(x) = (x + padding)^3 - a mod N, for some large but known padding. Now you can't simply compute an integer cube root, and the above doesn't work anymore. What you can do is look for a product of f(x) with some other polynomial g(x): f(x)g(x) still has all the roots of f(x), but may have smaller coefficients, small enough that evaluating f(x)g(x) at the root you're looking for does not wrap around N.
Now look at what this multiplication looks like: f(x)g(x) = c_0 f(x) + c_1 x f(x) + c_2 x^2 f(x) + ..., where c_i are the unknown coefficients of g(x). This is a sum of integer multiples of known coefficient vectors, and finding the set of coefficients c_i that makes this sum as small as possible is exactly the same as finding a short vector in the lattice spanned by (f(x), xf(x), x^2f(x), ...), where the vectors are the coefficient vectors of each (shifted) polynomial. You also have to weight the coefficients according to their exponent, to account for the fact that the coefficient of, say, x^3 needs to be smaller than the coefficient of x to avoid wrap-around.
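To make the weighting concrete, here's a tiny sketch of just the basis construction. The polynomial coefficients and the bound X are placeholders; all it does is print the matrix you would then hand to a lattice reduction (LLL) routine:

```python
# Row j holds the coefficients of x^j * f(x), lowest degree first, and the
# column for x^i is scaled by X**i, where X is the assumed bound on the root.

def shifted_weighted_basis(f, shifts, X):
    dim = len(f) + shifts - 1              # columns: x^0 .. x^(deg f + shifts - 1)
    basis = []
    for j in range(shifts):
        row = [0] * dim
        for i, c in enumerate(f):          # x^j * f(x) moves coefficient i to column j + i
            row[j + i] = c * X**(j + i)
        basis.append(row)
    return basis

f = [17, 5, 3, 1]                          # placeholder for f(x) = x^3 + 3x^2 + 5x + 17
for row in shifted_weighted_basis(f, shifts=3, X=10):
    print(row)
# [17, 50, 300, 1000, 0, 0]
# [0, 170, 500, 3000, 10000, 0]
# [0, 0, 1700, 5000, 30000, 100000]
```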
Once lattice reduction gives you back an f(x)g(x) with small enough coefficients, you compute its integer roots, which is doable very quickly, and check which of them work for f(x). That's the gist of it. The full-fledged method also uses powers of f(x) modulo powers of N instead of simply multiplying it by g(x), and that was the trick Coppersmith introduced, but if you understand the simple version the full version is straightforward.
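And here's a toy end-to-end sketch of the pipeline, with the assumptions spelled out: the modulus, padding, bound and root are made-up toy values; lattice reduction is delegated to the third-party fpylll package; the basis it reduces is f(x) itself plus rows for N·x^i (the first-power case of the powers-of-N trick mentioned above, which is what lets coefficients get reduced modulo N); and the integer-root step is replaced by brute force over the tiny bound just to keep the code short, where a real implementation would factor the polynomial over the integers:

```python
# Toy pipeline: build the weighted basis, LLL-reduce it, read polynomials off
# the short rows, and look for integer roots that also satisfy f(x) = 0 mod N.
from fpylll import IntegerMatrix, LLL

# Toy parameters (illustrative only). N is a product of two Mersenne primes
# standing in for an RSA modulus; x0 is the root we pretend not to know.
N = (2**61 - 1) * (2**31 - 1)
padding = 2**50 + 12345
X = 2**10                       # assumed bound on the unknown root
x0 = 777                        # secret; only used to build a and to check the result
a = pow(x0 + padding, 3, N)

# f(x) = (x + padding)^3 - a mod N, coefficients lowest degree first, reduced mod N.
f = [(padding**3 - a) % N, (3 * padding**2) % N, (3 * padding) % N, 1]
d = len(f) - 1                  # degree 3

# Basis rows: N * x^i for i < d, plus f(x) itself, with column x^i scaled by X^i.
rows = []
for i in range(d):
    row = [0] * (d + 1)
    row[i] = N * X**i
    rows.append(row)
rows.append([c * X**i for i, c in enumerate(f)])

# LLL-reduce. A short enough row h satisfies |h(x)| < N for all |x| <= X, so at
# the root it must vanish over the integers, not just mod N.
B = IntegerMatrix.from_matrix(rows)
LLL.reduction(B)

# Undo the X^i column scaling and look for roots. The bound is tiny here, so
# brute force over the candidates stands in for proper integer root finding.
recovered = set()
for r in range(B.nrows):
    h = [B[r, i] // X**i for i in range(B.ncols)]   # exact: entries are multiples of X^i
    for x in range(X):
        if sum(c * x**i for i, c in enumerate(h)) == 0 and pow(x + padding, 3, N) == a:
            recovered.add(x)

print(recovered)                # should print a set containing 777
```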