> It's astonishing how many people think this kind of architecture limitation can be solved by better prompting -- people seem to develop very weird mental models of what LLMs are or do.
Wait till you hear about Study Mode: https://openai.com/index/chatgpt-study-mode/ aka: "Please don't give out the decision straight up but work with the user to arrive at it together"
Next groundbreaking features:
- Midwestern Mode aka: "Use y'all everywhere and call the user honeypie"
- Scrum Master Mode aka: "Make sure to waste the user's time as much as you can with made-up stuff and pretend it matters"
- Manager Mode aka: "Constantly ask the user when they think they'll be done with the prompt session"
Those features sure are hard to develop, but I am sure the geniuses at OpenAI can handle it! The future is bright and very artificially generally intelligent!
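To be fair, the reason the Study Mode jab lands is that such a "mode" is plausibly nothing more than a system prompt wrapped around the same model. A minimal sketch of that idea, assuming the standard openai Python client and a hypothetical instruction text (not OpenAI's actual Study Mode prompt):

    # A "mode" as nothing more than a system prompt on top of the same model.
    # Assumes the official openai Python package; the instruction text is
    # hypothetical, not OpenAI's actual Study Mode prompt.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    STUDY_MODE_PROMPT = (
        "Don't give the answer straight up. Ask guiding questions and "
        "work with the user so they arrive at it themselves."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # any chat model works; the "mode" lives in the prompt
        messages=[
            {"role": "system", "content": STUDY_MODE_PROMPT},
            {"role": "user", "content": "What's the derivative of x^2?"},
        ],
    )
    print(response.choices[0].message.content)

Which is the original point: no amount of prompting like this changes the underlying architecture, it only changes the tone of the output.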