Well, actually, working with "averages" as baselines before you start experimenting with more complex ML models is a good habit.
Sure, they are dummy regressors [1], but they are useful for proving that whatever ML model you choose is at least better than a dummy baseline. If your model can't beat it, you need to develop a better one.
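A minimal sketch of that sanity check with scikit-learn's DummyRegressor (the data here is synthetic, just for illustration):

```python
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.metrics import mean_absolute_error

# toy data: a noisy linear relationship
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# the "average" baseline: always predicts the training mean
baseline = DummyRegressor(strategy="mean")
baseline.fit(X, y)
baseline_mae = mean_absolute_error(y, baseline.predict(X))

# any real model is only worth keeping if its error beats baseline_mae
```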
They can even be used as a placeholder model so you can develop your whole architecture around it, while another teammate iterates on more complex experiments.
You could also settle on a moving-average process as a first model for a time series [2], because it is easy to implement and simple to reason about.
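In the spirit of the above, here's a rolling-mean forecast sketch (note this is the simple "average of the last k points" baseline, not the statistical MA(q) model from [2], which regresses on past forecast errors; the window size is an arbitrary choice):

```python
import numpy as np

def rolling_mean_forecast(series, window=3):
    """Predict each point as the mean of the previous `window` points."""
    series = np.asarray(series, dtype=float)
    return np.array([
        series[t - window:t].mean()           # average of the last `window` values
        for t in range(window, len(series))   # one forecast per remaining point
    ])
```

Even this one-liner gives you an error number that a fancier time-series model has to beat.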
Never under-estimate the power of an "average".
[1] https://scikit-learn.org/stable/modules/generated/sklearn.du... [2] https://en.wikipedia.org/wiki/Moving-average_model