
Heya, as with all language models, if you open the conversation with antagonistic questions, the rest of the conversation thread becomes tainted. If you ask most of your questions in a new thread, almost everything you ask here will be answered. See our model card for more prompting guidance.


Hi Jason, I don't think my conversation was antagonistic; I was just probing. I expected to hear "Claude" or "Claude v2" or "2.1" etc. I then thought it was strange that it couldn't answer any of what seemed to be specific questions.

Here is a Vanilla GPT with "You are a helpful assistant" instructions answering the questions easily: https://chat.openai.com/share/b6a60a9d-4b38-4b06-953f-bce4f8...

Now, I know comparing to GPT-4 is a little unfair. I like Claude and I want it to do great, but the first step is accepting that it (for now) lags behind in terms of capabilities.

The question is: how do we get it to the point where it is able to answer random, arbitrary questions like "Tell me something that happened in 1990." etc.


What is antagonistic about that?



