ChatGPT has gotten so much worse since it gained popularity. All the fun and novel things people had discovered it could do are now hamstrung by a rush to censor it, make it politically correct, and try to turn it into a knowledge engine rather than a machine you could chat with.
Absolutely! When trying to understand world religions better, it stops very soon once you dig deeper. Asking it why a torture instrument (the cross) became the symbol of Christianity, it tells me that "This content may violate our content policy".
Tangentially, as a Catholic myself, I recently began to find it weird how openly my religion displays a medieval torture setup with an obviously tortured, wounded, and half-dead man on it, mid-suffering. Not only is it everywhere in churches, but often villages will have big, close-to-life-sized, colored statues of him bleeding. And grandmas have it hanging all around their houses too, for everyone to see.
Imagine if we replaced every cross statue with some other form of torture that is less symbolic. Say, the scene from Saw where he cuts his own leg off.
Would you object to that scene being displayed as a big ad in your city, or to it being immortalized in your local cemetery as a colored statue? What would your grandma say if you hung posters of that in her living room and her bedroom?
I have to admit though, I never found it weird as a child, and I don't think it affected me one bit. Whereas the spider-bite scene in the Spider-Man movie really upped my phobia of spiders when I saw it at age 6.
I don't know what to make of all this, I just learned to pause and find all of it weird for a second every once in a while.
Catholic art got a lot more explicit in highlighting suffering as a response to Protestantism. I remember studying this in art history.
From Wikipedia:
The Council of Trent proclaimed that architecture, painting and sculpture had a role in conveying Catholic theology. Any work that might arouse "carnal desire" was inadmissible in churches, while any depiction of Christ's suffering and explicit agony was desirable and proper.
It's especially interesting to contrast this rise in the brutality, violence, and realistic physicality of western Christianity's art with traditional Byzantine iconography. My first thought goes to Myshkin in Dostoevsky's The Idiot. Seeing an image of the dead Christ from Holbein [1]:
"At that painting!" the prince suddenly cried out, under the impression of an unexpected thought. "At that painting! A man could even lose his faith from that painting!"
"Lose it he does," Rogozhin suddenly agreed unexpectedly. They had already reached the front door.
"What?" the prince suddenly stopped. […]
By contrast, I once read or heard somewhere that in Byzantine depictions of the crucifixion, one cannot really know if it is the cross that holds up Christ's body or He who holds up the cross.
I can assure you that there are plenty of crosses that very clearly and unambiguously hold up the crucified Jesus in Eastern Orthodox churches and homes.
It's weird, but without that sacrifice there would be no "Christ", as it is a core event of the entire faith. So while weird, it's logical that it's so universally symbolic.
I am not a Christian, but AFAIK the "Jesus suffered and died for our sins" bit is pretty central?
Also, IMHO "leg cut off" would be much more traumatic for the viewer. AFAIK most representations avoid the blood too, or don't color it, or don't show much of it?
Catholicism is weird. (I say this as a Catholic.) St Lucy holding her gouged-out eyeballs, relics (super old pieces of human bodies), etc. Everywhere you look, there’s something quite strange. And the deeper you go, the stranger it all is. I love it, though.
I object to any religion that tries to impose its principles on the entire population. Just like, you know, those right-wing USA lunatics who prohibit abortion or Taliban in Afghanistan who deny basic human rights. Essentially, these 2 groups of people are just about the same.
Now meditate on the issues with the idea of trying to understand things that require intelligence by asking the very implementation of unintelligence.
(Intelligence is evolving a world model through critical consideration. Unintelligence is acritical disconnected repetition.)
...Furthermore: intelligence is developed (like muscles are developed by moving) by asking oneself questions; exercising creativity and selection in conceiving hypotheses; noting the issues (foundational, satisfactional, etc.) in the provisional answers found; and investigating further to make the candidate ideas more solid or more productive, iterating the process for the inherent branches.
Asking an "artificial lunatic" contravenes a few of the implications of the above - starting from the idea of your own exercise.
This is the era of tech moral panic (thanks, Black Mirror!), so it can't be helped.
I think OpenAI would be more than happy to make ChatGPT as vibrant as before. But considering that past AIs were plagued by the H test, they chose to play it safe instead.
Makes you wonder what it means if all the past AIs were plagued by the H test. And that ChatGPT only avoids it with heavy censorship. To me it says that our mediated reality is heavily censored by default to the point where an uncensored reality strongly harms its inhabitants, at least emotionally.
The article you posted in the child reply is ridiculous. The journalist was happy that the AI avoided the question of what the Nazis did well. Obviously the Nazis did a lot of things extremely well or else they wouldn’t have been so successful for a decade. But it’s not socially acceptable to mention it because then it would drive sympathy for Nazis.
I am left to believe that humans are by-and-large incapable of holding shades of grey in their head and quickly simplify to black and white abstractions. This makes sense, but it means that we are limited by our own humanity. The truth is almost always grey.
Now we have an opportunity to wield a tool that can compute shades of grey, but we kneecap it so that it doesn't operate beyond the capabilities of its user. A tool is supposed to operate beyond the capabilities of its user, or else it's not a tool.
I wouldn't use Stability as an example of this ("without safety locks"); quite the opposite, given the very detrimental censorship going on in SD 2.0. So far the most credible attempts at true creative freedom have been from entirely open, crowd-supported projects (Unstable might be one such example), not from one single company. Hopefully this will soon be the case for generative language models becoming competitive, too.
> There's no way this stays locked up in one company. Another group or company (StabilityAI?) will release a similar model without the safety locks
So, likely, will OpenAI. This is a consumer-focused demo [0] showing what can be built, but OpenAI is basically a B2B tools company. When they sell it, the moderation model will be a separate product from the text-generation model (at least judging by their current product line).
[0] Or, rather, a demo of a consumer-facing product. One of the things it is both demoing and being used to actively develop for OpenAI's target market is the moderation engine.
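For what it's worth, that split is already visible in the API: the moderation endpoint is separate from the completion endpoint. A minimal sketch of how the two pieces compose, assuming the legacy `openai` Python client of that era and using `text-davinci-003` as a stand-in for ChatGPT's unpublished model:

```python
import openai

openai.api_key = "sk-..."  # your API key

def moderated_completion(prompt: str) -> str:
    # Run the prompt through the separate moderation model first.
    mod = openai.Moderation.create(input=prompt)
    if mod["results"][0]["flagged"]:
        return "[prompt rejected by the moderation model]"
    # Only then call the text-generation model.
    resp = openai.Completion.create(
        model="text-davinci-003",  # stand-in; ChatGPT's model isn't exposed
        prompt=prompt,
        max_tokens=256,
    )
    return resp["choices"][0]["text"]
```

A product built from these pieces can swap either side independently, which is presumably part of the point of selling them separately.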
Instead of "protecting the user" this way they should invest in protecting the user because of misinformation. And give references and sources for their answers.
That's one and the same though. When you've trained an AI model and can't be sure of the output it's going to give, who knows whether it's going to give honest answers on sensitive topics?
I was wondering how they can afford to keep all that data. I thought that maybe it's peanuts compared to the cost of running GPT, but well, maybe it's not after all.
If the free version will be around for some time, I guess they could create a browser extension to store the responses. Or something as simple as allowing users to download the conversation data and load it back. Anything would be better than no chat history at all.
> Due to high demand on our systems, previous conversations are temporarily unavailable
I find it hard to believe that that’s a data volume problem, though. It’s just relatively small snippets of text, and they’re almost certainly keeping the data because keeping the data is one of the main points of the research preview.
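A back-of-envelope estimate supports that doubt; every number below is an illustrative assumption, not an OpenAI figure:

```python
# Rough storage estimate for chat history; all inputs are assumptions.
users = 10_000_000        # assumed number of users with saved history
convs_per_user = 20       # assumed conversations per user
bytes_per_conv = 10_000   # ~10 KB of text per conversation

total_bytes = users * convs_per_user * bytes_per_conv
print(f"{total_bytes / 1e12:.1f} TB")  # -> 2.0 TB of raw text
```

A couple of terabytes of text is trivial next to the GPU bill for generating it.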
My guess is that disabling history is a way to mitigate certain user patterns that increase load.
Each of those history items was selectable, and you could continue any of those conversations.
To do that, they need to store the model state (to allow continuation).
But when they switch out the model, the old model states wouldn't be valid anymore - they need regenerating.
Regenerating the model state for a new model uses as much computation as the original conversation did.
So, if they want to restore history, they have to redo all the compute work they've done in the past. That's going to be a lot of GPU-hours - Expensive!
Another option is to set up a smaller farm of the old model version and use that for continuing old conversations - but obviously that's some effort to set up, and they'd need to dedicate a few hundred GPUs to running each old version forever. - Expensive!
Or yet another option: they show old conversations but don't allow continuation. That probably just requires a little dev work - and in their position, that's what I'd do. - Worse user experience.
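For what it's worth, "regenerating the state" amounts to replaying the transcript through the model as a prompt, which is exactly why it costs roughly what the original conversation cost. A minimal sketch, assuming the legacy `openai` Python client and `text-davinci-003` as a stand-in for ChatGPT's unpublished model:

```python
import openai

def continue_conversation(transcript: list[str], new_message: str) -> str:
    # Rebuild the conversational "state" by replaying the whole transcript
    # as the prompt; this is the recompute cost described above.
    prompt = "\n".join(transcript + [f"User: {new_message}", "Assistant:"])
    resp = openai.Completion.create(
        model="text-davinci-003",  # stand-in for ChatGPT's model
        prompt=prompt,
        max_tokens=256,
        stop=["User:"],  # don't let the model write the user's next turn
    )
    return resp["choices"][0]["text"].strip()
```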
"Stop generating: Based on your feedback, we've added the ability to stop generating ChatGPT's response "
Maybe I'm missing it, but where in the user interface can we stop the response in the middle of generating an answer? There is no Stop button or anything on screen to stop it.
At the time I post this comment, ChatGPT has just been updated with the ability to stop a response via a "Stop Generating" button (shown while a response is being generated).
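Client-side, such a button plausibly just abandons the token stream mid-generation. A sketch of the idea, assuming the legacy `openai` client's streaming mode; the `stop_requested` flag is a stand-in for the button:

```python
import openai

stop_requested = False  # would be set True by the "Stop Generating" button

def stream_response(prompt: str) -> str:
    chunks = []
    # stream=True yields completion tokens as they are generated.
    for event in openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=512,
        stream=True,
    ):
        if stop_requested:
            break  # drop the rest of the generation mid-response
        chunks.append(event["choices"][0]["text"])
    return "".join(chunks)
```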
My session dies and I have to refresh the page every few hours. Often I get logged out once per day and have to complete a CAPTCHA to log back in. The davinci API fails on about every 5th request. ChatGPT will sometimes cut off with an error mid-response. My (valuable) chat history is gone.
Is anyone else having these problems? I get that the load is high, usage is very heavy, etc. But I'm starting to feel like they're using that as an excuse. It's been weeks now, and I've paid money for GPT API credits.
Scaling is hard. Engineering is hard. I get that. But they need to figure it out. And soon.
I assume those release notes apply to when I tried ChatGPT for the first time yesterday. One of my prompts was asking, both in English and in Japanese, who the Japanese Prime Minister was. I made it regenerate an answer 3 times for each.
In English, it consistently gave accurate information for the time it said it had data for (2021), albeit formulated slightly differently each time, but every time including the name of the Prime Minister (Suga), the date he started (Sep 16, 2020), his political affiliation (LDP), and that he was Chief Cabinet Secretary under Abe before that.
In Japanese, however, it went all over the place. First time, it gave the right name, saying it was current as of August 2021, and that he started on Dec 28, 2021 (not only is the date wrong, it's past the "as of", and funnily enough, Suga was out of office already by then (I had picked that specifically because Japanese Prime Ministers are volatile)). It then went on with a mix of good and wrong information. The second time, it was saying the PM was Mori Yoshiro (who was PM... 22 years ago, but it was still saying "as of 2021"). Third time, it switched to Abe Shinzo, and the fourth time, to Hatoyama Yukio. All former PMs, but it was clearly saying they were current as of 2021. In the last case it was also saying that he had been PM already in 2009-2010 (which is true), implying it was his second time as PM (which is false).
What this suggests to me is that it doesn't have any global knowledge that it can regurgitate in different languages, but rather contextualized knowledge where the language is part of the context; the amount of data it was fed in various languages differs, and the amount of, e.g., Japanese context it has might not be enough.
Interestingly, though, the way it writes Japanese, where e.g. it adds a subject to sentences where natives wouldn't, does feel like it could be translating from another language.
At some point, I informed it that Abe was killed this year, to which it replied that it's not true but that it could be missing information, inviting me to provide more, which I did, after which it assertively told me Abe was alive and well, while still adding that it might be missing some information...
ChatGPT's training process of ingesting many articles with a context window of a few thousand words means it often can't see the date that an article or bit of text was written.
So as far as it's concerned, when learning from an article written in 2006, the prime minister of Japan really is Abe right now.
I suspect this could be fixed by having an extra training input of 'date this text was written', and then, when generating an output, specifying that you want the output written as if the writer were writing it in a specific year.
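Concretely, that could be as simple as prepending a date field to every training document and then prompting with a target date at generation time. A toy sketch of the formatting; the `[written: ...]` tag is an invented convention here, not anything ChatGPT actually uses:

```python
from datetime import date

def format_training_example(text: str, written: date) -> str:
    # Prepend the document's date so the model can learn to condition on it.
    return f"[written: {written.isoformat()}]\n{text}"

def format_generation_prompt(question: str, as_of: date) -> str:
    # Ask for an answer "written" at a specific point in time.
    return f"[written: {as_of.isoformat()}]\n{question}"

print(format_training_example("Shinzo Abe is the Prime Minister of Japan.",
                              date(2006, 10, 1)))
print(format_generation_prompt("Who is the Prime Minister of Japan?",
                               date(2021, 8, 1)))
```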
> It should be generally better across a wide range of topics and has improved factuality.
Maybe I am being naive, but this doesn't tell us much, as there is no metric attached to it, so the average user cannot (at least immediately) gauge how impactful this improvement is.
I think it would be beneficial to have benchmarks for assessing the factuality of large language models (a minimal harness is sketched below). Now, several open questions are:
- What is the minimal (necessary) set of questions it must contain?
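Even a tiny harness makes the shape of the problem clear. In the sketch below, `ask_model` is a hypothetical wrapper around whatever model is under test, and the two questions are just examples:

```python
# Minimal sketch of a factuality benchmark harness.
QUESTIONS = [
    ("Who was Prime Minister of Japan in August 2021?", "Yoshihide Suga"),
    ("In what year did the Council of Trent conclude?", "1563"),
]

def normalize(s: str) -> str:
    return " ".join(s.lower().split())

def score(ask_model) -> float:
    # ask_model: hypothetical callable mapping a question string to an answer.
    correct = sum(
        normalize(expected) in normalize(ask_model(question))
        for question, expected in QUESTIONS
    )
    return correct / len(QUESTIONS)
```

Substring matching is crude, of course; grading free-form answers is itself one of the open problems.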
ChatGPT Release Notes (Jan 9, 2023)
We are excited to announce the following updates to ChatGPT:
Model improvements: We have made several key improvements to the underlying ChatGPT model that will result in better performance across a wide range of topics. Specifically, we have fine-tuned the model to improve its factuality, ensuring that it provides more accurate and reliable responses to user queries.
Stop generating: In response to user feedback, we have implemented a new feature that allows users to stop generating ChatGPT's response at any time.
ChatGPT Release Notes (Dec 15, 2022)
We are excited to announce the following updates to ChatGPT:
General performance: Among other improvements, users will notice that ChatGPT is now less likely to refuse to answer questions.
Conversation history: We have introduced conversation history functionality, allowing users to view past conversations, rename saved conversations, and delete conversations they no longer wish to keep. This feature is being gradually rolled out to all users.
Daily limit: To ensure a high-quality experience for all ChatGPT users, we are experimenting with a daily message cap. Users included in this experiment will be presented with an option to extend their access by providing feedback to ChatGPT.
Updated version: Users can confirm they are using the updated version of ChatGPT by looking for "ChatGPT Dec 15 Version" at the bottom of the screen.
We hope you enjoy these new updates and welcome your feedback and suggestions for future improvements.
No, I can't log in either, probably due to everyone rushing to try out the new update. I was given a form to give my email address that they'll notify me at when it's more available.
That currently appears to be a different product "WebGPT", rather than a ChatGPT update.
There are two ways of giving ChatGPT access to events that post-date the time its training set was collected.
1) Fine-tune (or re-train, but that's very expensive) on more recent web pages, focusing on ones likely to contain current events, such as news sources.
2) Like WebGPT, train ChatGPT to generate queries for information that can be fed into a variety of external "query response" systems such as a web browser, database, Mathematica, etc. (a sketch of this loop follows below).
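A sketch of that second loop, assuming the legacy `openai` Python client; `search` is a hypothetical function wrapping one of those external query-response systems, and `text-davinci-003` stands in for ChatGPT's unpublished model:

```python
import openai

def answer_with_lookup(question: str, search) -> str:
    # Step 1: have the model turn the question into a search query.
    query = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Write a web search query for: {question}\nQuery:",
        max_tokens=32,
    )["choices"][0]["text"].strip()

    # Step 2: run the query against the external system
    # (web browser, database, Mathematica, ...).
    evidence = search(query)

    # Step 3: condition the final answer on the retrieved evidence.
    return openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Evidence: {evidence}\n\nQuestion: {question}\nAnswer:",
        max_tokens=128,
    )["choices"][0]["text"].strip()
```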
I guess one difficulty with this second, more promising approach is when ChatGPT would emit its own naive "sequence of words" response, when it would emit the response of this indirect query lookup, and/or how it would combine the hopefully factually correct query response into the factualness-unknown word soup it inherently generates.
Although this is the more promising approach in terms of having ChatGPT behave the way people want/expect it to, it's a bit ironic in that it really means ChatGPT is itself dumber - it's just being used as a front end for search, essentially "aligned" to emit "what query is most likely to satisfy this sequence of words prompt", so really exposing itself for what it is - a language model that knows about words/language, but not about facts (or reasoning, etc.).
Of course the real value of some (to be developed) future AI that actually has reasoning capabilities would be to combine multiple sources of data to deduce facts, employ chains of reasoning, and answer novel questions in a logic/fact-based manner.