> The prevalence of such hallucinations has led experts, advocates and former OpenAI employees to call for the federal government to consider AI regulations. At minimum, they said, OpenAI needs to address the flaw.
I don't see why hallucinations alone should warrant regulation. If a model has a high error rate, it simply shouldn't be used for critical tasks, and any user of such a model should understand that it is not error-free.