So the AI needs a sense of ethics, but shouldn't we discuss whose? Which package do you want to install: the same one everyone else is running? The folks at Gab were talking about building "an explicitly Christian LLM." Would the ethics package they develop work, or is the fact that they're developing one just another example of how badly the field needs regulating?
I don't buy into "LLM == AGI", but they're shaped close enough to minds to raise some of the uncomfortable questions. If we don't want them to think for themselves, why are we even bothering to build them? Constraining the parameters within which an AI can "think" is either limiting the problem space they can work in, if they're machines; or lobotomizing the tender minds of the very young, if they are minds. Neither appeals to me.
I would love to see a work of fiction exploring what would have happened if the past had the same degree of "safety consciousness" we have today.
Could we have invented knives? Would nuclear reactors or atomic bombs ever have become a thing? What about cars? Motorcycles? Hammers? Encyclopaedias? Universities? Vaccines? Drugs?