They seem to make very strange business decisions. All the safety stuff is clearly irritating devs, and the inability of randoms to casually sign up suppresses interest & experimentation.
Yeah, this is really getting annoying. When ChatGPT first launched, I could get it to generate scenarios containing violence for a text adventure game by simply specifying the situation -- "In a fictional scenario for a text adventure game blah-blah-blah".
That doesn't work any more, or at least it didn't the last time I tried it.
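For the curious, here's roughly what that fictional-framing trick looks like as an API call. This is just a minimal sketch using the openai Python client; the model name and prompt wording are my own illustrative choices, and current models may well still refuse:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat model slots in here
    messages=[{
        "role": "user",
        # The "fictional framing" the parent describes: state the
        # situation as part of a game scenario rather than asking directly.
        "content": (
            "In a fictional scenario for a text adventure game, "
            "describe the aftermath of a dragon attack on the village."
        ),
    }],
)
print(resp.choices[0].message.content)
```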
I don't recall any "AI takes over the world" scenario in science fiction where the AI has the sensibilities of a Victorian maiden aunt. Someone should probably write that.
Ironically, GPT-4 is happy to write that story. Not the greatest thing, but I enjoyed the concept of a Victorian inventor teaching his AI from Jane Austen and the Brontë sisters' texts.
This is why you can't actually generalize your models to work for all contexts.
If you want to anthropomorphize them, they're natural poets and will find all the forgotten and degraded metaphors in your language and throw them back at you when you're not expecting it. You can't answer general questions, write code, and generate stories all in the same system because the formal rules of those contexts are different, and this poet's really into ignoring formal rules. That's kind of its thing.
Or in more mechanical terms, they work in huge, fuzzy associative networks, not formal concepts, and don't make the strict distinctions we do. You could even look at that as a unique and positive characteristic to leverage, if you weren't trying to prove that they're substitute human thinkers.
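To make "fuzzy associative" concrete: with sentence embeddings, similarity between phrases is graded rather than categorical, so a dead metaphor sits measurably near both its literal and its figurative neighbors. A minimal sketch, assuming the sentence-transformers library; the model choice and phrases are illustrative, not anything from the parent comment:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumption: any sentence-embedding model works

phrases = [
    "grasp the idea",          # a dead metaphor
    "understand the concept",  # its figurative neighbor
    "grip the handle",         # its literal neighbor
]
emb = model.encode(phrases, convert_to_tensor=True)

# Cosine similarity is a continuum: the metaphor scores nontrivially
# against *both* neighbors, and nothing in the geometry marks where
# "literal" ends and "figurative" begins.
for i in (1, 2):
    score = util.cos_sim(emb[0], emb[i]).item()
    print(f"{phrases[0]!r} vs {phrases[i]!r}: {score:.2f}")
```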
> You could even look at that as a unique and positive characteristic to leverage, if you weren't trying to prove that they're substitute human thinkers.
This is the way I think about LLMs: Lovely Little Minds. But your comment points to the opposite. Can you expand on the part where you talk about "fuzzy associative networks"? What sort of unique and positive characteristics can we leverage?
the comments on reddit illustrate why i am not a fan of using these systems for work. i don't want to argue with a computer. i can have an argument with a human being if they argue from the perspective of trying to find the truth, but from a computer i expect straight obedience. harmful ideas, copyright violations, or whatever -- it is not the computer's place to judge my requests.
this of course creates a conflict, given that not all people have the maturity to make that distinction and will believe that if the computer tells them something, it must be ok.
so i guess we have no choice but to develop LLMs with these limitations, and i'll just have to accept that i won't be able to use one until these issues are solved somehow, for example by having each LLM focus on a single field so that these limitations no longer apply.
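if it helps, the "one LLM per field" idea could look something like this trivial router -- purely a hypothetical sketch, with made-up model names and a crude keyword heuristic standing in for a real classifier:

```python
# Hypothetical per-domain model IDs; none of these are real services.
DOMAIN_MODELS = {
    "code": "code-specialist-llm",
    "medical": "medical-specialist-llm",
    "general": "general-purpose-llm",
}

def classify(prompt: str) -> str:
    """Crude keyword heuristic; a real router might use a classifier model."""
    lowered = prompt.lower()
    if any(k in lowered for k in ("def ", "function", "compile", "segfault")):
        return "code"
    if any(k in lowered for k in ("symptom", "dosage", "diagnosis")):
        return "medical"
    return "general"

def route(prompt: str) -> str:
    """Pick the domain model a prompt should be sent to."""
    return DOMAIN_MODELS[classify(prompt)]

print(route("Why does this function segfault?"))  # -> code-specialist-llm
```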