"Apple is smart. If the AI capex cycle flattens in late '27 as models hit diminishing returns, does Apple regain pricing power simply by being the only customer that can guarantee wafer commits five years out?"
That's the take I would pursue if I were Apple.
A quiet threat of "We buy wafers on consumer demand curves. You're selling them on venture capital and hype."
Why should that change TSMC decision making even a little?
The reality is that TSMC has no competition capable of shipping an equivalent product. If AI fizzles out completely, the only way Apple can choose not to use TSMC is by deciding to ship an inferior product.
A world where TSMC drains all the venture capital out of the AI startups, using Nvidia as an intermediary, and then the bubble pops and they all go under, is a perfectly happy place for TSMC. In these market conditions they are asking for cash upfront. The worst that can happen is that they overbuild capacity using other people's money that they don't have to pay back, leaving them in an even more dominant position in the crash that follows.
Nvidia is not a venture capital outlet. They are a self-sustaining business with several high-margin customers that will buy out their whole product line faster than any iPhone or Mac.
From TSMC's perspective, Apple is the one that needs financial assistance. If they wanted the wafers more than Nvidia, they'd be paying more. But they don't.
But Nvidia has had high-profile industry partners for decades. Nintendo isn't "venture capital and hype" nor is PC gaming and HPC datacenter workloads.
But Nvidia wasn't able to compete with Apple for capacity on new process nodes with Nintendo volumes (the concept is laughable; compare Apple device unit volumes to game console unit volumes). What has changed in the semiconductor industry is overwhelming demand for AI-focused GPUs, and that is paid for largely with speculative VC money (at this point, at least; AI companies are starting to figure out monetization).
When I started out as a sysadmin it was all shell and glue, with different syntax on the 8 different flavours of *NIX I worked on between '94 and '97. Then I found Perl, and suddenly you could actually build things that felt "real". It took me straight into web application development by '98, and I'm not sure I would have stayed in this field had it not existed (I was also working in neuro-diagnostics at the time and might have stayed there).
I saw some really elegant stuff written in Perl.
I also saw some absolutely unhinged, impossible-to-maintain garbage.
A large percentage of my work is peripheral to info security (ISO 27001, CMMC, SOC 2), and I've been building internet companies and software since the '90s (so I have a technical background as well), which makes me think I'm qualified to have an opinion here.
And I completely agree that LLMs (the way they have been rolled out for most companies, and how I've witnessed them being used) are an incredibly underestimated risk vector.
But on the other hand, I'm pragmatic (some might say cynical?), and I'm just left here thinking "what is Signal trying to sell us?"
I didn't mean to imply a conflict of interest, I'm wondering what product or service offering (or maybe feature on their messaging app) prompted this.
No other (major) tech leaders are saying the quiet parts out loud, right? About the efficacy, the cost to build and operate, or the security and privacy nightmares created by the way we have adopted LLMs.
Whittaker’s background is in AI research. She talks a lot (and has been for a while) about the privacy implications of AI.
I'm not sure of any one thing that could be considered to prompt it. But a large one is the wide deployment of models on devices with access to private information (Signal potentially included).
Maybe it's not about gaining something, but rather about not losing anything. Signal seems to operate from a kind of activism mindset, prioritizing privacy, security, and ethical responsibility, right? By warning about agentic AI, they’re not necessarily seeking a direct benefit. Or maybe the benefit is appearing more genuine and principled, which already attracted their userbase in the first place.
Exactly. If the masses cease to have "computers" any more (deterministic boxes solely under the user's control), then it matters little how bulletproof Signal's ratchet protocol is, sadly.
Signal is conveying a message of wanting to be able to trust your technology/tools to work for you and work reliably. This is a completely reasonable message, and it's the best kind of healthy messaging: "apply this objectively good standard, and you will find that you want to use tools like ours".
Since Signal lives and dies on having trust of its users, maybe that's all she is after?
Saying the quiet thing out loud because she can, and feels like she should, as someone with a big audience. She doesn't have to do the whole "AI for everything and the kitchen sink!" cargo-culting to keep stock prices up, or any of that nonsense.
How can a service like Signal live and die by the trust of its users when it openly lies to them? Signal refuses to update its privacy policy to warn users that it stores sensitive information in the cloud (and more recently, even the contents of users' messages in some cases).
Lying to users by saying that Signal doesn't collect or store anything when it actually does doesn't sound like something a company that expected you to trust it would do. It sounds like something a company might do if it needed a way to warn people that a service isn't safe to use while under a gag order.
Signal has been trying to tell us for years now that their service is already compromised. That's why they've refused to update their privacy policy after they started keeping sensitive data in the cloud and even after they started keeping message content for some users.
I'd argue that Signal is trying to sell sanity at their own direct expense, during a time when sanity is in short supply. Just like "Proof of Work" wasn't going to be the BIG THING that made crypto the new money, the new way to program, "agents" are another damp squib. I'm not claiming that they're useless, but they aren't worth the cost within orders of magnitude.
I'm really getting tired of people who insist on living in a future fantasy version of a technology at a time when there's no real significant evidence that their future is going to be realized. In essence this "I'll pay the costs now for the promise of a limitless future" is becoming a way to do terrible things without an awareness of the damage being done.
It's not hard: any "agent" that you need to double-check constantly to keep it from doing something profoundly stupid that you would never do isn't going to fulfill the dream/nightmare of automating your work. It will certainly not be worth the trillions already sunk into its development and the cost of running it.
I bought a license for Pixelmator Pro a couple of years ago. IIRC it cost 30 or 40 EUR. I don't use it much, but it is unlikely you're going to need all of that software.
I could see using an iPad for automation, triggered by MIDI, but I use an Air for that (and even if I used my Pro, I'd still have to use a USB-C hub, because for some reason Apple thinks 1 (or 2) USB ports is enough). Sigh.