I think this is the most positive thing that could happen. What's more likely is that the whole response stream is manipulated to sell us things (ideas, products).
All of the future billion dollar model training runs might be for conversion rate optimization.
User mentions they didn't sleep well. Model delivers jarring information right before the user's bedtime. Model subtly suggests other sleep-disruptive activities; user receives coupons for free coffee. User converts on an ad for sleeping medication.
(This is already happening, intentionally or not)
Notably, the open-source models OpenAI released right before GPT-5 are likely good enough to substitute for 95% of typical ChatGPT use cases.