It may make sense to run an hourly or daily job to collect data from the API and then implement the filters exclusively in your back-end. This pattern works well with rate-limited APIs and a dataset that changes fairly slowly. There's some risk that an item shown has already sold (the user would click back and try another).
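A minimal sketch of that collect-then-filter-locally pattern. The fetch is stubbed out with static data; the function names and listing shape are assumptions for illustration, not eBay's actual schema:

```python
# Local copy of the dataset, refreshed on a schedule instead of per search.
listings = []

def fetch_listings():
    # Hypothetical stand-in for the real eBay call, which would need
    # auth, paging, and rate-limit handling.
    return [
        {"id": 1, "price": 120.0, "sold": False},
        {"id": 2, "price": 80.0, "sold": True},
    ]

def refresh():
    # Run this from cron or a background thread every hour/day.
    global listings
    listings = fetch_listings()

def search(max_price):
    # All user-facing filtering runs against the local copy, so no API
    # call is spent per search; a stale "sold" flag is the trade-off.
    return [l for l in listings if not l["sold"] and l["price"] <= max_price]
```

The downside is exactly the one mentioned above: between refreshes, an item can sell and still show up in results.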
When it comes to filtering, there are enough unique selections a user can make that letting the eBay API handle filtering for you would cause far too many cache misses.
At a previous job I considered a system that would make synchronous calls to the backend API until it went down (or we got rate limited). When the backend was unavailable, we'd switch to filtering in our own service using data we'd previously cached.
E.g., if a cached query asked for (cpu >= 3.0 GHz, cores >= 2), we can also answer (cores >= 4) by filtering the previous result. This wouldn't find any CPUs below 3.0 GHz unless there were other cached responses covering them. It works well when a "best effort" response is acceptable, even if it's incomplete.
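A rough sketch of that fallback, assuming queries are just dicts of attribute -> minimum (">=" constraints only; real queries would need more operators). The answer is flagged incomplete whenever the cached query was stricter on some attribute than the new one, which is exactly the "can't find CPUs below 3 GHz" case:

```python
def matches(item, query):
    # True if the item satisfies every >= constraint in the query.
    return all(item.get(attr, float("-inf")) >= lo for attr, lo in query.items())

def answer_from_cache(query, cache):
    """Best-effort answer from cached (query, results) pairs.

    Returns (items, complete): complete is True only when some cached
    query's constraints are all implied by the new query, i.e. the new
    minimum on each cached attribute is at least the cached minimum.
    """
    best = ([], False)
    for cached_query, cached_items in cache:
        items = [it for it in cached_items if matches(it, query)]
        complete = all(query.get(attr, float("-inf")) >= lo
                       for attr, lo in cached_query.items())
        if complete:
            return items, True
        if len(items) > len(best[0]):
            best = (items, False)  # keep the largest partial answer
    return best

cache = [
    ({"cpu": 3.0, "cores": 2},
     [{"cpu": 3.2, "cores": 4}, {"cpu": 3.0, "cores": 2}]),
]
items, complete = answer_from_cache({"cores": 4}, cache)
# complete is False: the cached query excluded CPUs below 3.0 GHz,
# so slower 4-core CPUs may be missing from the answer.
```

The `complete` flag is what you'd key the "data might be incomplete" warning off.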
That's a very good idea, thanks! I think I'll have to do exactly that. Maybe in the fallback scenario, I can display a warning that data might be incomplete.
eBay only allows 5,000 API calls per day for most of the APIs useful to me, which is very easy to hit: https://developer.ebay.com/develop/apis/api-call-limits
My infinite-scrolling implementation probably didn't help either, but I couldn't help myself; it was so easy to implement with HTMX.