>it's becoming very likely the user is becoming delusional or may engage in dangerous behavior.
Talking to AI might be the very thing that keeps those tendencies below the threshold of dangerous. Simply flagging long conversations would not be an effective way to deal with these problems, but AI learning how to talk to such users may be.
As an avid coffee-lover, I am sympathetic to the efforts to make coffee-brewing more of a science than an art. After all, it seems that doing so improves science just as much! https://today.ucsd.edu/story/coffee-and-turbulence
That said, I am less sympathetic to the concept of a "perfect cup", which seems to make coffee an exclusively competitive endeavour. I mean, why not enjoy coffee-brewing as one might enjoy tinkering?
People have widely varying tastes in coffee (as in everything). There is no "perfect cup." But I think they might be talking about improving extraction from the coffee grounds. This might not be so relevant to the home brewer, but could be useful for, say, manufacturers of coffee machines.
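For anyone wondering what "improving extraction" means in numbers: coffee folks usually quantify it as extraction yield, the fraction of the dry grounds that ends up dissolved in the cup, computed from brew mass and total dissolved solids (TDS). A rough illustrative calculation; the figures are typical hobbyist numbers, not anything from the linked study.

```python
# Extraction yield: what fraction of the dry coffee dose dissolved into the cup.
def extraction_yield(beverage_mass_g: float, tds_percent: float, dose_g: float) -> float:
    dissolved_g = beverage_mass_g * tds_percent / 100.0
    return dissolved_g / dose_g * 100.0

# Illustrative pour-over numbers: 20 g of grounds, ~320 g of brew at 1.35% TDS.
print(f"{extraction_yield(320, 1.35, 20):.1f}% extraction")  # about 21.6%, inside the often-quoted 18-22% band
```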
I see that the hyperbole is the point, but surely what these machines do is literally predict? The entire prompt engineering endeavour is to get them to predict better and more precisely. Of course, these are not perfect solutions - they are stochastic after all, just not unpredictably so.
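To illustrate "stochastic but not unpredictable": decoding samples from a probability distribution over next tokens, usually shaped by a temperature parameter, so outputs vary but cluster around the likely continuations. A minimal sketch with made-up logits; this is not any particular model's decoder.

```python
import numpy as np

def sample_token(logits, temperature=1.0, rng=None):
    """Sample one token index from softmax(logits / temperature)."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Made-up logits over a 4-token vocabulary, with one clearly preferred token.
logits = [4.0, 1.0, 0.5, -2.0]

for t in (0.2, 1.0):
    draws = [sample_token(logits, temperature=t) for _ in range(1000)]
    share_top = draws.count(0) / len(draws)
    print(f"temperature={t}: top token sampled {share_top:.0%} of the time")
# Low temperature is near-deterministic; higher temperature adds variety,
# but the likely continuation still dominates: stochastic, not unpredictable.
```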
Prompt engineering is voodoo. There's no sure way to determine how well these models will respond to a question. Of course, giving additional information may be helpful, but even that is not guaranteed.
Also every model update changes how you have to prompt them to get the answers you want. Setting up pre-prompts can help, but with each new version, you have to figure out through trial and error how to get it to respond to your type of queries.
I can't wait to see how badly my finally sort-of-working ChatGPT 5.1 pre-prompts will work with 5.2.
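For anyone unfamiliar with "pre-prompts": in API terms they are usually just a system message sent ahead of every user message. A minimal sketch using the OpenAI Python SDK; the model name and instructions are placeholders, and the point is only that this is the text you end up re-tuning whenever the model changes.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "pre-prompt" lives in the system message; this is the text that tends
# to need re-tuning whenever the model behind the API changes.
PRE_PROMPT = (
    "Answer in terse bullet points. "
    "If you are unsure, say so instead of guessing."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; swap in whatever model you actually use
    messages=[
        {"role": "system", "content": PRE_PROMPT},
        {"role": "user", "content": "Summarize the trade-offs of prompt engineering."},
    ],
)
print(response.choices[0].message.content)
```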
It definitely isn't voodoo; it's more like forecasting the weather. Some forecasts are easier to make, some are harder (that it'll be cold in winter vs. the exact location and wind speed of a tornado, for an extreme example). The difference is that you can mix things up in the prompt to maximize the likelihood of getting what you want out, and there are feasibility thresholds for use cases: getting a good answer 95% of the time is qualitatively different from 55%.
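To make the 95% vs 55% point concrete: if a task chains several prompts and each step has to come out right, per-step reliability compounds quickly. A back-of-the-envelope sketch (the rates and chain length are just illustrative):

```python
# Probability that every step of an n-step prompt chain comes out right,
# assuming independent per-step success rates (a simplification).
def chain_success(per_step: float, n_steps: int) -> float:
    return per_step ** n_steps

for rate in (0.95, 0.55):
    print(f"per-step {rate:.0%}: a 5-step chain succeeds "
          f"{chain_success(rate, 5):.0%} of the time")
# Roughly 77% vs 5%: why a 95% prompt can clear a feasibility threshold while a 55% one cannot.
```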
No, it's not. Nowadays we know how to predict the weather with great confidence. Prompting may get you different results each time. Moreover, LLMs depend on the context of your prompts (because of their memory), so a single prompt may be close to useless and two different people can get vastly different results.
Around May, Altman said to the FT that his job was the "most important job maybe in history" (FT: https://www.ft.com/content/a3d65804-1cf3-4d67-ac79-9b78a10b6...). He did come back from the brink of death before as well. But steering OpenAI into an "ecosystem" rather than focusing on the product when you are up against the likes of Google? Seems like cashing in on the hype too early.
> Altman said to FT that his job was the "most important job maybe in history"
The lack of self-awareness in some experienced senior execs (who should know better) continues to stun me. Even if I suspected that sentiment about the importance of the job I was doing might be correct, I'd never say it - especially in an on-the-record media interview. Two reasons: 1. Any statement like that has a high probability of coming across very poorly, 2. It's a proposition that can only be judged in retrospect by people external to the context.
Frankly, I think the odds are at least 50/50 that Open AI will someday be considered the Pets.com (or even Enron) of early AI.
> Open AI will someday be considered the Pets.com (or even Enron) of early AI.
Very unlikely.
Pets.com never produced anything of note. OpenAI was the leader in AI for 3 years. Maybe now Gemini 3 and Opus 4.5 are slightly ahead, but that's a bit subjective. For all practical purposes OpenAI is in a 3-way tie with Google and Anthropic.
Google would love to sink them and Anthropic, and remain the monopoly in the AI space, like they are in the search space.
But Microsoft, Nvidia, AMD, Oracle, and a few others would be much less happy with a monopolistic Google, and they'll do their best to help OpenAI and Anthropic stay afloat (as long as it does not cost their business too much to do so).
As for Enron, I don't think another Enron is likely to happen, not at this level of visibility. People simply don't like going to prison. Could one person try to fudge some numbers? Maybe. Elizabeth Holmes comes to mind. But Theranos did not have 800 million active users, and did not have every analyst in the world scrutinizing it. At the level of OpenAI, you would need many people to be in a conspiracy to fudge the numbers, and when there are many people, there are many opportunities for someone to blow the whistle.
True, but the horse population started (slightly) rising again when horses went from economic tools to recreational ones for humans. What will happen to humans?