Maybe it's an instance of Parkinson's law [1]: if it all fits in GPU memory, just put it all in and plot it. This is much simpler to implement than any out-of-memory technique. It's also easier for the user—`scatter(x, y)` would work effortlessly with, say, 10 million points.
But with 10 billion points, you need to consider more sophisticated approaches.
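A back-of-envelope calculation makes the gap concrete (a sketch assuming float32 coordinates and ignoring any per-point metadata the plotting library might add):

```python
def scatter_bytes(n_points, coords=2, bytes_per_coord=4):
    """Approximate memory needed to hold raw (x, y) float32 data."""
    return n_points * coords * bytes_per_coord

# 10 million points: ~80 MB, fits comfortably in GPU memory
print(scatter_bytes(10_000_000) / 1e6, "MB")   # 80.0 MB

# 10 billion points: ~80 GB, exceeds typical GPU memory
print(scatter_bytes(10_000_000_000) / 1e9, "GB")  # 80.0 GB
```

Three orders of magnitude in point count is the difference between "load it all" and needing out-of-memory techniques.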
[1] https://en.wikipedia.org/wiki/Parkinson%27s_law