Just got this email from OpenAI:
Subject: GPT Store Launch
Dear GPT Builder,
We want to let you know that we will launch the GPT Store next week. If you’re interested in sharing your GPT in the store, you’ll need to:
- Review our updated usage policies and GPT brand guidelines to ensure that your GPT is compliant
- Verify your Builder Profile (settings > builder profile > enable your name or a verified website)
- Publish your GPT as ‘Public’ (GPT’s with ‘Anyone with a link’ selected will not be shown in the store)
Thank you for investing time to build a GPT.
- ChatGPT Team
1. The GPT builder itself didn't feel like it was driven by a well-tuned prompt (i.e., the prompt OpenAI uses to guide prompt creation). It produced long-winded prompts that left out information and didn't pay attention to what I said, even though anything I enter into the GPT builder interface is presumably very important!
2. The quotas are fairly low, and they apply to testing too. I got maybe 10 minutes of playtesting before I ran out of quota.
3. There are no tools to help with testing; it's all just vibes. No prompt comparisons.
4. The implied RAG is entirely opaque. You can upload documents, and I guess they get used...? But how? The best I could figure out was to add text to the prompt telling GPT to be very open about how it used documents, then ask it questions to see whether it understood the content and purpose of what I uploaded.
5. There's no extended interface outside of the intro questions. No way to emit buttons or choices, just the ever-present text field.
6. There's no hidden state. I don't particularly want impossible-to-see state, but a powerful technique is to get GPT to make plans or internal notes as it responds. These are very confusing when presented in the chat itself. In applications I often use tags like <plan>...</plan> to mark these, which is compatible with the simple data model of a chat.
7. There's no context management. As with hidden state, I'd like to be able to mark things as "sticky": things that should be prioritized when the conversation outgrows the context window.
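To make the `<plan>...</plan>` technique from point 6 concrete, here's a minimal sketch of what an application layer can do that the GPTs interface can't: have the model write its plan inside tags, then strip the tags out before showing the reply. The function and variable names are my own, not part of any OpenAI API.

```python
import re

# Matches the hypothetical <plan>...</plan> tags described above.
# DOTALL lets a plan span multiple lines; the non-greedy .*? keeps
# separate plans from being merged into one match.
PLAN_RE = re.compile(r"<plan>(.*?)</plan>", re.DOTALL)

def strip_plans(raw_reply: str) -> tuple[str, list[str]]:
    """Split a model reply into user-visible text and hidden plan notes."""
    plans = PLAN_RE.findall(raw_reply)
    visible = PLAN_RE.sub("", raw_reply).strip()
    return visible, plans

raw = "<plan>First confirm the user's goal, then suggest steps.</plan>Sure! What are you trying to build?"
visible, plans = strip_plans(raw)
# visible is just the reply; plans holds the internal note for logging
```

This fits the simple data model of a chat: the plan rides along in the same message, and only the presentation layer hides it.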
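And a minimal sketch of the "sticky" idea from point 7, assuming a simple list-of-messages history and a character budget standing in for real token counting: when the history outgrows the window, evict the oldest non-sticky messages first. The message shape and field names here are assumptions for illustration, not a real API.

```python
def trim_context(messages: list[dict], max_chars: int) -> list[dict]:
    """Drop the oldest non-sticky messages until the history fits the budget."""
    kept = list(messages)
    total = sum(len(m["content"]) for m in kept)
    for m in messages:  # iterate oldest-first
        if total <= max_chars:
            break
        if not m.get("sticky"):  # sticky messages are never evicted
            kept.remove(m)
            total -= len(m["content"])
    return kept

history = [
    {"content": "x" * 50},                  # oldest, evictable
    {"content": "y" * 50, "sticky": True},  # pinned instructions
    {"content": "z" * 50},                  # newest
]
trimmed = trim_context(history, max_chars=120)
# the sticky message survives even though it's older than the last one
```

A real implementation would count tokens and might summarize evicted messages rather than drop them, but the point is the priority bit: the builder, not the model, decides what must survive.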
These are all fixable, though I worry that OpenAI's confidence in AI maximalism will keep them from building these harder features, and that they'll instead rely on GPT "getting smarter" and magically not needing real features.