The math here is mixing categories. The token calculation for a single 1-GW datacenter is fine, but then it gets compared to the entire industry’s projected $8T capex, which makes the conclusion meaningless. It’s like taking the annual revenue of one factory and using it to argue that an entire global build-out can’t be profitable. On top of that, the revenue estimate uses retail GPT-5.1 pricing, which is the absolute highest-priced model on the market, not what a hyperscaler actually charges for bulk workloads. IBM’s number refers to many datacenters built over many years, each with different models, utilization patterns, and economics. So this particular comparison doesn’t show that AI can’t be profitable—it’s just comparing one plant’s token output to everyone’s debt at once. The real challenges (throughput per watt, falling token prices, capital efficiency) are valid, but this napkin math isn’t proving what it claims to prove.
> but then it gets compared to the entire industry’s projected $8T capex, which makes the conclusion meaningless.
Aren't they comparing annual revenue to the annual interest you might have to pay on $8T? Which the original article estimates at $800B. That seems consistent.
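The interest comparison is easy to sanity-check. A minimal sketch, assuming a 10% cost of capital (the rate implied by the article's $800B-on-$8T figure); the single-datacenter throughput and token price below are placeholder assumptions for illustration, not sourced numbers:

```python
# Back-of-envelope check of the comparison discussed above.
# Only the $8T capex and $800B interest figures come from the thread;
# everything else is an assumed placeholder.

CAPEX = 8e12          # industry-wide projected capex ($8T, per the article)
INTEREST_RATE = 0.10  # assumed rate that reproduces the article's $800B

annual_interest = CAPEX * INTEREST_RATE  # -> 8e11, i.e. $800B

# Hypothetical single 1-GW datacenter revenue (placeholder values):
tokens_per_year = 1e16     # assumed annual token throughput
price_per_million = 10.0   # assumed retail $ per 1M output tokens
revenue = tokens_per_year / 1e6 * price_per_million

print(f"annual interest on industry capex: ${annual_interest:,.0f}")
print(f"one plant's hypothetical revenue:  ${revenue:,.0f}")
```

Even with generous placeholder numbers, the two quantities differ in scope (one plant's revenue vs. the whole industry's debt service), which is the category error the parent comment describes.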
Did you listen to the recent interview with Ben Bajarin? I thought that interview alone justified the subscription. Curious as to whether anyone else felt the same.
Fantastic interview. Hard to get much info from inside the world Bajarin was speaking of. Notable how everyone is saying they can't get capacity for the tokens they're trying to serve.
The current AI wave has been compared (by sama) to electricity and sometimes transistors. AI is just going to be in all the products. The trillion dollar question is: do you care what kind of electricity you are using? So: will you care what kind of AI you are using?
In the last few interviews with him I have listened to he has said that what he wants is "your ai" that knows you, everywhere that you are. So his game is "Switching Costs" based on your own data. So he's making a device, etc etc.
Switching costs are a terrific moat in many circumstances and require a 10x product (or whatever) to get you to cross over. Claude Code was easily a 5x product for me, but I do think GPT5 is doing a better job on just "remembering personal details" and it's compelling.
I do not think that apps inside chatgpt matter to me at all, and I think it will go the way of all the other "super app" ambitions openai has.
If you take that at face value, shouldn't every investor just back Google or Apple instead? Like, OpenAI is, at best, months ahead when it comes to model quality. But for them to get integrated into the lives of people in the way all their competitors are would take years. If the way in which AI becomes this ubiquitous trillion dollar thing involves making it hyper-personalized, is there any way in which OpenAI is particularly well positioned to achieve that?
> I do think GPT5 is doing a better job on just "remembering personal details" and it's compelling.
Today I asked GPT5 to extract a transcript of all my messages in the conversation, and it hallucinated messages from a previous conversation, maybe leaked through the memory system. It cannot tell the difference. Indiscriminate learning and use of the memory system is a risk.
I mean, don't you think this is more analogous to the introduction of computing than electricity? If you told people in 1960 that there would be supercomputers inside people's refrigerators, do you think they would have believed you?
And most people actually don't care what CPU they have in their laptop (enthusiasts still do, which I think continues to match the analogy); they care more about the OS (chatGPT app vs Gemini etc).
Can the world and tech survive fruitfully without AI? Yes. Can the world and tech survive without electricity and transistors? Not really. The modern world would come crashing down if transistors and electricity disappeared overnight. If AI disappeared overnight, the world might just be a better place.
This may not be your cup of tea but I'm using stage manager on Mac for the first time after hating on it for years, for exactly this. I've got 3 monitors and they all have 4-6 stage manager windows. I bundle the terminals, web browser and whatever into these windows. Easily switching from project to project.
I think there are a lot of great things about Mercury - I've used them for a few companies and think they are leading the way on an innovative feature set. It's very easy to create unlimited debit cards and limit them in a variety of ways to name just one small feature. Very handy.
That said, they lost a $250,000 incoming wire critical to my business and I couldn't get a hold of anyone until I started tweeting about it a week later and the CEO responded. The money showed up with no explanation, ever. We stopped using them for critical money flows after that.
I am a massive caffeine drinker. Like many of us, I monitor my sleep religiously, so I have an anecdote. Late afternoon espresso or hot coffee usually affects my sleep quite noticeably.
However - I have found that cold brew does not bother my sleep! At least the brand that I drink. Very strange, but awesome. Cold brew does not have the acidity of hot coffee which is a double bonus if you get acid reflux at night from poor eating or drinking habits. Give it a whirl.
I think it will wildly vary depending on how regular your life cycle is.
As an anecdote I also tried tracking my sleep, only to realize:
- consumer trackers are wildly inaccurate (best we can do is compare them to a "medical grade" reference tracker, which might be accurate or not, who knows)
- there were so many other things going on every day that pinning it down to even two or three factors was just impossible (e.g. I drink more coffee when I have more time to make it, which is related to my stress level and work volume, etc.)
- watch data was a PITA to export and analyze separately. I did it twice or thrice and didn't bother after that.