It seems like it would train users to ask questions that it can actually answer. (They might also need some examples of what sort of questions to ask.)
Mostly it would train users not to use that service and to go to one whose model outputs results they can copy-paste to complete their assignment.
So these companies cannot do this: they would hemorrhage too many users, and in practice companies cannot go against their profit incentives.
Major industry players have been doing that for a while now. It's just hard to actually design training regimes that give LLMs better hallucination-avoidance capabilities.
And it's easy to damage those hallucination-avoidance capabilities by training an LLM wrong, as OpenAI demonstrated when they fried o3 with RLVR that encouraged guesswork.
That "SAT test incentivizes guesswork" example they give in the article is one they had to learn for themselves the hard way.
I am curious what precisely this "legislative nonsense" is.
There seems to be some sort of consensus among the legal teams of big American tech companies that the EU is sometimes not worth it for now, since OpenAI isn't the only one withholding a service from the EU (I'm thinking of meta.ai).
Still, I haven't been able to find any information about what exactly prevents them from selling these services in the EU.