This assumes the portion of the enterprise fee attributable to this feature is exactly large enough to cover the cost of forgoing potential training data, which is an absurd assumption that can't be proven and has no basis in economic theory.
Companies are trying to maximize profit; they are not trying to minimize costs so they can continue to do you favors.
These arguments crop up frequently on HN: "This company is doing X to their customers to offset their costs." No, they are a company, and they are trying to make money.
The fact that companies want to maximise profits doesn't prove the point you think it does.
Nobody is arguing that there's an exact match in value to the company between one user giving OpenAI permission to use their chat history for future training and one user paying $20/month. But by your simplistic logic, no company would ever offer a free tier, because a free tier doesn't directly maximise revenue.
It's very obvious that getting lots of real-world examples of users using ChatGPT is beneficial for multiple reasons - from use in future training runs (or fine-tuning), to analysing what users want to use LLMs for, to analysing which areas ChatGPT currently performs well or badly in, etc.
So it's not simply about "offsetting costs": the point is that money into their bank account and this sort of data into their databases both contribute to the long-term profitability of the company, even though only one of them is direct, instant revenue.
Before ChatGPT was released for the world to use, OpenAI were even paying people (both employees and not) to have lots of conversations with it for them to analyse. The exact same logic that justified that also justifies letting some users pay part or all of the fee for the service in data permissions rather than money.
I'm speaking from experience making these sorts of business decisions, and to a company like OpenAI this is just basic common sense.