Apple’s experience has almost nothing to do with “harnessing” LLMs, and everything to do with its wildly misjudged assumption that it could run a viable model on a phone. Useful LLMs require their own power plants and can only feasibly be run in the cloud, or in a limited way on powerful hardware like an RTX 5090. Apple seems to have missed that the “large” in large language model isn’t just a metaphor.
“still far from being able to do the latter”
These models have been in wide use for under three years, and AI IDEs for barely a year. Gemini 2.5 Pro is shockingly good at architecture if you treat it as a conversation rather than expecting a one-shot exercise. I share your native skepticism, but the pace of improvement has taken me aback and made me reluctant to stake much on what LLMs can’t do. Give it six months.
Programmers who build interesting things likely shouldn’t worry. The legions who churn out voluminous but shallow corporate apps and glue code might be more concerned.
Please oh please don’t post articles like this on hacker news. Yes these are tragic stories but they are amply covered in thousands of other online forums. HN has been an oasis free from conflict-generating political articles like this. Let HN be one place dedicated to news for — and about — hackers. If politics is your thing, Reddit is the place for you.