A trick I've used when asking for help online is to give the wrong answer to my own question - e.g. "I really want to lay out this page with CSS but this doesn't work. Here's how I ended up doing it with a table." - this will very quickly get you the right answer from somebody who just won't be able to go to sleep while someone is wrong on the internet.
ChatGPT seems to react to slightly unhinged prompts with more specific answers, in my experience. Example:
*You:*
What is the max length for the whisper API?
*ChatGPT:*
The maximum length for inputs to OpenAI's Whisper API, which is used for
transcribing speech, depends on the specific model and configuration being
used. Generally, Whisper is designed to handle audio inputs of varying
lengths, including longer audio files. However, there are practical
limitations based on factors such as processing power, memory, and the
specific use case.
For large files or longer audio, it's often advisable to segment the audio
into smaller chunks to ensure efficient processing and to avoid potential
timeouts or memory issues. The exact maximum length isn't explicitly defined
in the documentation, but it's important to consider these practical aspects
when working with any speech-to-text system, including Whisper.
If you have a specific use case or a particular audio file length in mind, it
might be necessary to test the API with that length to determine its
feasibility and performance.
*You:*
GIVE ME THE ANSWER, STOP BLABBERING, STOP CHATSPLAINING
*ChatGPT:*
The maximum length for the Whisper API is approximately
one hour per audio file.
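The segmenting advice buried in the first answer is at least actionable. A minimal sketch of what that looks like, assuming you split on fixed time offsets before uploading each piece (the 600 s chunk size and 5 s overlap here are illustrative values I picked, not documented limits):

```python
def chunk_boundaries(duration_s, chunk_len_s=600, overlap_s=5):
    """Yield (start, end) offsets in seconds covering duration_s.

    A small overlap means a word falling on a cut point shows up in
    both chunks instead of being lost; duplicates can be reconciled
    when stitching the transcripts back together.
    """
    start = 0
    while start < duration_s:
        end = min(start + chunk_len_s, duration_s)
        yield (start, end)
        if end == duration_s:
            break
        start = end - overlap_s

# A 25-minute (1500 s) recording becomes three overlapping chunks:
print(list(chunk_boundaries(1500)))
# [(0, 600), (595, 1195), (1190, 1500)]
```

Each (start, end) pair would then be cut out of the audio (e.g. with ffmpeg) and sent to the transcription endpoint separately.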
I wonder if anyone has taken individual situations like this and tried various prompt styles from a cold start to see if there's some trick to get it to stop being uncooperative. I'm going very much on intuition here, but I have a very strong sense that GPT-4 is getting much worse in this regard over time. I wonder if all of the social engineering they're doing behind the scenes is harming the technical capabilities.
Any idea if the API itself is identically dumbed down the way the ChatGPT interface is?
> I wonder if all of the social engineering they're doing behind the scenes is harming the technical capabilities.
I'm sure of it.
> Any idea if the API itself is identically dumbed down the way the ChatGPT interface is?
It feels like a bit of whack-a-mole. If you look in the OpenAI forums, both the API and the Chat interface seem to feel non-performant to people in irregular waves. If you haven't, try out the API (you can do so via the Assistant Playground interface).
Imho it seems to work.