Based on the feedback, we could have done a much better job with these results (lessons for our next experiment). But yes, the models were tested against the same dataset, which was aggregated over different granularities (1 minute, 1 hour, 1 day).
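For context, here's a minimal sketch of that aggregation step (not our actual pipeline; the DataFrame layout and column name are hypothetical), assuming the raw data is a pandas series with a timestamp index:

```python
import pandas as pd

# Hypothetical raw metric: three days of per-minute samples with a single "value" column.
idx = pd.date_range("2024-01-01", periods=3 * 24 * 60, freq="min")
raw = pd.DataFrame({"value": range(len(idx))}, index=idx)

# The same series aggregated at the three granularities used in the experiment.
aggregated = {
    rule: raw["value"].resample(rule).mean()
    for rule in ("1min", "1h", "1D")
}

# Each model is then evaluated against every granularity of the same underlying data.
print({rule: len(series) for rule, series in aggregated.items()})
```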
At the moment our focus is on observability, hence the narrow scope of our dataset. A pretty good benchmark for observability seems to be Datadog's BOOM: https://huggingface.co/datasets/Datadog/BOOM
But for general-purpose time-series forecasting, benchmarks mentioned in other comments, like GIFT or M4, might come in handy. We might include them in the follow-up experiment.
Author here. We're just getting started with these experiments and plan to apply them to more features on our roadmap. Future posts will be more detailed, based on the feedback we received here. Once we finish implementing these features, we'll be happy to share the code and dataset.
This looks like a great benchmark! We've been thinking of doing a better, more detailed follow-up, and this seems like the perfect dataset to do that with. Thanks!
Good to see the feedback being received so well! Sorry if my message came across as condescending; that was not the intent. I recommend reading this piece on metrics: https://openforecast.org/wp-content/uploads/2024/07/Svetunko.... It's easy to grasp, yet it contains great tips.
We're grateful for the honest feedback (and the awesome resource!); it makes it easier to identify areas for improvement. Also, your point about using multiple metrics (based on use cases, audience, etc.) makes a lot of sense. We'll incorporate this in our next experiment.
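As a rough illustration of the multi-metric idea (not our actual evaluation code; the arrays below are made up), something like this reports a few complementary error measures side by side so different audiences can look at the one they care about:

```python
import numpy as np

def evaluate(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Report several complementary forecast-error metrics rather than a single one."""
    err = y_true - y_pred
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    # sMAPE is scale-free, which helps when comparing series of very different magnitudes.
    smape = 100 * np.mean(2 * np.abs(err) / (np.abs(y_true) + np.abs(y_pred)))
    return {"MAE": mae, "RMSE": rmse, "sMAPE": smape}

# Hypothetical actuals vs. forecasts for one aggregated series.
actual = np.array([10.0, 12.0, 13.0, 15.0])
forecast = np.array([11.0, 11.5, 14.0, 14.0])
print(evaluate(actual, forecast))
```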