I think the George Box aphorism linked at the bottom ("All models are wrong [but some are useful]") is closer to the right way to think about this.
With complexity comes additional explanatory power; a model with perfect explanatory power would have infinite complexity. But the tradeoff is not linear in most problem domains, so we can first add the concepts that maximize explanatory power relative to the complexity they introduce.
And much of the world, physical and social, can be explained by fairly simple models, which is excellent. For things that simple models capture less well, or where precision matters enough that we want to shrink the error term further, then great: pile on more complexity progressively.
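As a toy illustration of that diminishing-returns tradeoff (the data and model family here are entirely made up for the sketch): fit polynomial models of increasing degree to noisy, mostly-linear data and watch how much variance each added degree explains. The first term buys almost everything; later ones buy progressively less.

```python
# Hypothetical example: polynomial models of increasing "complexity"
# (degree) fit to noisy data, reporting R^2 (fraction of variance
# explained) at each step. Pure stdlib, no numpy.
import random

random.seed(0)
xs = [i / 10 for i in range(100)]
# True process: mostly linear, with mild curvature and noise.
ys = [2 * x + 0.3 * x * x + random.gauss(0, 0.5) for x in xs]

def fit_poly(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations."""
    n = degree + 1
    # Build the normal-equation system A c = b.
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, n))) / A[i][i]
    return coeffs

def r_squared(xs, ys, coeffs):
    """1 - (residual variance / total variance) for the fitted model."""
    mean_y = sum(ys) / len(ys)
    preds = [sum(c * x ** i for i, c in enumerate(coeffs)) for x in xs]
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

for degree in range(4):
    r2 = r_squared(xs, ys, fit_poly(xs, ys, degree))
    print(f"degree {degree}: R^2 = {r2:.4f}")
```

The degree-0 model (just the mean) explains nothing; the linear term captures most of the variance in one step; the quadratic term adds a small further gain; beyond that, extra degrees mostly fit noise. The residual never quite reaches zero, which is the point: you stop adding complexity when the marginal explanatory power stops paying for it.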