Haven't heard of it, and I couldn't find the phrase "piecewise-linear interpolation search" online at all. Care to share? In particular, how are the pieces determined?
The Google paper is "The Case for Learned Index Structures", I think? That paper discusses index structures stored alongside the data, so it's not directly comparable to the OP, which starts from the sorted data alone. Google's use case is so dominated by lookups that, as far as I can tell, they don't even report the time to construct the index. I'm not sure the same priorities hold in every domain.
Piecewise linear was one of the first things I experimented with, but it edges toward index-based approaches, which is a different topic. It is also inefficient for non-polynomial distributions. On distributions that approach linear, the extra steps to find the right spline can outweigh the advantage of reduced error, so I think search optimizations and index building can coexist, shifting where plain search should serve as the base case of an indexed search.
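For concreteness, here's a minimal Python sketch of one way piecewise-linear interpolation search can go. The fixed-width pieces and the names are my own illustrative assumptions, not any particular implementation; real variants fit pieces to an error bound instead. The bisect over segment boundaries is exactly the "extra steps to find the right spline" I mean:

    import bisect

    def build_segments(keys, num_segments=16):
        # Illustrative segmentation: split the sorted keys into equal-width
        # index ranges and record (first_key, first_index, slope) per piece.
        n = len(keys)
        step = max(1, n // num_segments)
        segments = []
        for start in range(0, n, step):
            end = min(start + step, n) - 1
            key_span = keys[end] - keys[start]
            slope = (end - start) / key_span if key_span else 0.0
            segments.append((keys[start], start, slope))
        return segments

    def pl_search(keys, segments, target):
        # Find the piece covering target, predict a position with that
        # piece's linear model, then scan outward to correct the error.
        boundaries = [s[0] for s in segments]
        i = max(0, bisect.bisect_right(boundaries, target) - 1)
        first_key, first_index, slope = segments[i]
        guess = first_index + int((target - first_key) * slope)
        guess = min(max(guess, 0), len(keys) - 1)
        while guess > 0 and keys[guess] > target:              # overshot
            guess -= 1
        while guess < len(keys) - 1 and keys[guess] < target:  # undershot
            guess += 1
        return guess if keys[guess] == target else -1

On a nearly linear key distribution a single global line already predicts well, so the boundary lookup and per-piece bookkeeping here are pure overhead; that's the trade-off I'm describing above.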
Building and maintaining indices is also not free, so it's interesting to try to do your best with much less index maintenance.
This paper seems to imply that previous approaches didn't use piecewise-linear approximations?
Can someone clear up whether I've got my history wrong? Did I just assume piecewise-linear approximation is well-known when it isn't?