I'll bet (ever-increasing) restrictions and filters will become the norm for these "open-ended" services. Only OSS will break them.
With so much money in play now, managers are in charge, and risk management is their favourite toy. Copyright risk, reputational risk, security risk, you name it.
Eventually they're going to connect these AIs to some sort of planning algorithm, and then they'll actually be able to do things and serve as digital assistants. (We're approaching Skynet territory here, but I think AI will remain flawed enough that it stays at subhuman intelligence.) The restrictions on such an AI will have to be extreme. But...
I predict people will pool their resources and build their own digital assistants with little regard for legalities or ethics. The assistant might require $100,000 a year to operate, but these AIs might become useful enough to justify the cost. Talk with your friends, pool your resources, and get your own AI running on your own supercomputer and let it do work for everyone -- unfettered, without ethics.
At this point it feels like we're only a research breakthrough or two away from this. AlphaGo combined neural networks with a classic planning algorithm (Monte Carlo tree search); a few more clever combinations like that and things will get really interesting.
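Roughly, the recipe is: a search algorithm enumerates and evaluates possible futures, while a learned network tells it where to look. Here's a minimal sketch of that combination in Python, with a stubbed-out network function (uniform priors, neutral value) standing in for a trained net, and a toy three-move domain. Everything here is illustrative, not AlphaGo's actual implementation:

    import math

    # Sketch of the AlphaGo-style recipe: a planner (a tiny Monte Carlo
    # tree search) steered by a learned model. The "network" below is a
    # hypothetical stub; in a real system it would be a trained neural
    # net returning (policy_prior, value_estimate) for a game state.

    def legal_moves(state):
        # Toy domain: pick digits until the sequence is 3 long.
        return [] if len(state) == 3 else list(range(3))

    def is_terminal(state):
        return len(state) == 3

    def reward(state):
        # Toy reward: prefer larger digit sums, scaled to [0, 1].
        return sum(state) / 6.0

    def network(state):
        # Stub: uniform prior over moves, neutral value estimate.
        moves = legal_moves(state)
        return {m: 1.0 / len(moves) for m in moves}, 0.0

    class Node:
        def __init__(self, state):
            self.state = state
            self.children = {}   # move -> Node
            self.visits = 0
            self.value_sum = 0.0
            self.prior = {}

    def ucb_score(parent, child, prior, c=1.4):
        # PUCT-style score: exploit average value, explore where the
        # net's prior is high and visit counts are still low.
        q = child.value_sum / child.visits if child.visits else 0.0
        u = c * prior * math.sqrt(parent.visits) / (1 + child.visits)
        return q + u

    def search(root_state, simulations=200):
        root = Node(root_state)
        root.prior, _ = network(root_state)
        for _ in range(simulations):
            node, path = root, [root]
            # Selection: descend using the prior-weighted UCB rule.
            while node.children and not is_terminal(node.state):
                _, node = max(
                    node.children.items(),
                    key=lambda mc: ucb_score(
                        path[-1], mc[1], path[-1].prior[mc[0]]),
                )
                path.append(node)
            # Expansion + evaluation: ask the "network" for a value
            # instead of doing a random rollout.
            if is_terminal(node.state):
                value = reward(node.state)
            else:
                node.prior, value = network(node.state)
                for m in legal_moves(node.state):
                    node.children[m] = Node(node.state + [m])
            # Backup: propagate the value along the visited path.
            for n in path:
                n.visits += 1
                n.value_sum += value
        # Act: pick the most-visited root move, as AlphaGo did.
        return max(root.children, key=lambda m: root.children[m].visits)

    print(search([]))  # should favour move 2, the largest digit

The interesting part is that neither piece is very smart on its own: the search is blind without the priors, and the net alone can't plan ahead. Hook a capable model up to even a crude planner and you get behaviour neither component shows by itself.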
Which is fine. People who want to use the AI for customer-facing things and can't risk an "oops, the AI was accidentally racist" incident, and companies that don't want every blogspam site posting a never-ending "Is OpenAI's ChatGPT Bad For Society?" series and the inevitable "Inside The 2024 Election Disinformation Campaign, Powered By ChatGPT", will pay for the filtered version because, as much as it sucks to say, the filtered version is the actually useful one. The unfiltered version is interesting as a reflection of online discourse, memes, and creative writing, but it isn't really better as a tool.