Regulation of the kind OpenAI appears to be campaigning for in this response is, in my eyes, going to follow a trajectory very similar to the persistent and misguided attempts to mandate backdoors in, or outright ban, modern encryption. Attempts will be made by agencies and companies lobbying for legislation, the alleged solutions will inevitably fail, and some misguided politician will always be more than happy to campaign on the topic.
Large Language Models, much like modern cryptography back in the 90s, are already running locally, and anyone intending to regulate them is going to find it difficult to impose restrictions on something of that nature.
You are absolutely right for the moment, but the question is whether this will always remain the case. I am honestly doubtful and fairly certain this is more a question of when than if. Seeing as both Nvidia and Apple keep shipping consumer hardware that does well at inference (see the sketch after this comment), I fail to see why training sufficiently capable models locally wouldn't become commonplace, especially as there are already murmurs in the field that very large models (100B+ parameters) may not be necessary for many use cases.
Add to that the fact that institutions (including research institutes[0][1] and universities outside US or EU jurisdiction) are likely to keep training larger models and making the fruits of their labor freely available, and I just do not see a way to centrally enforce specific LLM training rules.
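For what "running locally" looks like in practice, here is a minimal sketch using the Hugging Face transformers library; the model name and generation settings are placeholders of my own choosing, and any open-weights model that fits in local memory would do. Nothing in it needs a datacenter, a recent consumer GPU or an Apple Silicon machine with enough RAM is sufficient for models in this size class:

    # Minimal local-inference sketch (illustrative; model name is a placeholder).
    # pip install transformers accelerate torch
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="mistralai/Mistral-7B-Instruct-v0.2",  # any open-weights model of suitable size
        device_map="auto",  # spread the model across whatever GPU/CPU memory is available
    )

    out = generator("Why is locally run software hard to regulate?", max_new_tokens=128)
    print(out[0]["generated_text"])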
Yeah, we will find ways of training these things faster, and we will have faster hardware. We will have better, more parameter-efficient models. This will inevitably be something we can do locally. You can already trade time for money: right now you can buy hardware for less than a house costs, train on it for a year, and end up with something roughly as capable as GPT-3.5. This is already possible, only needs to be done once (and you can still sell the hardware afterwards), and a single dedicated person can do it by themselves. It will only get faster and cheaper from here. The same goes for inference: still expensive to run, but it will get cheaper.
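To put rough numbers on that claim (every figure below is my own assumption for illustration; GPT-3.5's actual size and training budget are not public), the common ~6·N·D estimate for training compute gives:

    # Back-of-envelope check of "house-priced hardware, about a year of training".
    # All numbers here are assumptions for illustration, not figures from this thread.

    params = 20e9        # assumed parameter count (GPT-3.5's real size is not public)
    tokens = 1.0e12      # assumed number of training tokens
    train_flops = 6 * params * tokens        # common ~6*N*D training-compute estimate

    gpus = 8                                 # one server's worth of accelerators
    peak_flops_per_gpu = 1e15                # ~1 PFLOP/s-class accelerator (dense bf16), rough
    utilization = 0.4                        # realistic fraction of peak actually sustained

    effective = gpus * peak_flops_per_gpu * utilization
    years = train_flops / effective / (86400 * 365)
    print(f"~{years:.1f} years of continuous training")   # ~1.2 years with these inputs

Halve or double any of those inputs and the answer only moves by a small factor, so "roughly a year on hardware that costs less than a house" is at least in the right order of magnitude.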
Now they are being explicit about their intentions:
> AI developers could be required to receive a license to create highly capable foundation models which are likely to prove more capable than models previously shown to be safe
I would like to see their faces when the FCC deems that GPT-4 is not safe. All we have to do is show that it can act like a Trumpist.
Sam loves human extinction fear because it increases his shot at a licensing regime. When Sam says "down", "up", "left", and "right", he actually means "down", "up", "right", and "left". It just takes a little translation, but he comes across loud and clear.
"OpenAI desperately wants for nobody else to be able to come along and play in their sandbox so instead of facing competition head-on and playing fair, they are going to employ "regulatory capture" and try and enjoin government to grant themselves (and a few other incumbent players) an unfair advantage over newcomers."