No, but a lot of AI-adjusted wording has the very idiosyncratic AI style that is prevalent in the AI slop that is everywhere, and that style has quickly become associated with writing that is generally devoid of content and insight. So it is natural to have gut reactions to the typical phrasings that have become associated with AI.
As a business/product person it's pretty addictive (gotta watch the token spend!). This week a few workmates and I had an idea in a pub; on the train back I wrote a short spec and fired up some agents to start building. By the next evening, whilst doing our day jobs, we had a functional application working, not a PoC. A few years ago this would have been unthinkable.
I thought the same about watching people play video games, but that's clearly a thing! This might be useful for educating people on how to use these new tools, perhaps those not in engineering but in product or UX, who are less familiar with CLIs.
I have split my life into things I can control and things I can't, both at home and at work. There are things that sit on the line, of course, but global affairs are firmly non-controllable, in the near term at least, barring any elections through which I can express my view.
I do get why people are generally struggling more, the concept of stability in many senses of that word seems to be gone.
The way that people interact inside of knowledge companies to get things done is itself the fabric of how they operate. A recent SaaS CEO piece here calls this the "language games".
One of the things I have started to realize whilst building apps using AI is that you get a bit indulgent when it comes to features. So in my toy project I wanted all sorts of quality-of-life bells and whistles. If this were a proper enterprise application there would have been a review and prioritisation process where the merits would be weighed against the cost. In this case the cost is tokens, so a fraction of FTE cost. So I just type and it builds. Whilst this is satisfying, I am getting the unnerving sense it's not going to be good for me (or the toy app) in the long run.
Other comments have mentioned upstream delays in deciding what features to build now that teams can deliver faster - but you bring up another issue around downstream “understanding debt”. How can sales and marketing sell this stuff if they don’t even know what everything does? How does customer service support it? Sure you can just slop-together documentation, blogs, etc but what good are all these extra features if end-users don’t know or just don’t care about them?
Prioritisation driven by the cost of engineering forces you to think hard about what to build (and thus what not to build). If that calculation has now radically changed, which it has, then that presents a whole new risk that has not been thought about extensively yet, but I suspect it will be. It might be that customers can develop the thing they want (that, say, no other customer wants) themselves through well-defined interfaces, but then who supports and maintains that code?
It's an idyllic dream, as long as you don't need to make money!
No one who touches beans makes money. Only the largest multinational traders and cafes. The money from the specialty coffee chain goes to landlords, shipping companies, and equipment manufacturers.
Of course you'll need to live in the tropics too.
For learning about coffee production, the podcast "Making Coffee" by Lucia Solis is excellent (and industry award winning).
I have little doubt where things are going, but the irony of the way they communicate versus the quality of their actual product is palpable.
Claude Code (the product, not the underlying model) has been one of the buggiest, least polished products I have ever used. And it's not exactly rocket science to begin with. Maybe they should try writing slightly less than 100% of their code with AI?
More generally, Anthropic's reliability track record for a company which claims to have solved coding is astonishingly poor. Just look at their status page - https://status.claude.com/ - multiple severe incidents, every day. And that's to say nothing of the constant stream of bugs for simple behavior in the desktop app, Claude Code, their various IDE integrations, the tools they offer in the API, and so on.
Their models are so good that they make dealing with the rest all worth it. But if I were a non-research engineer at Anthropic, I wouldn't strut around gloating. I'd hide my head in a paper bag.
I find the GitHub issue experience particularly hellish: search for my issue -> there it is! -> the only comment is "Found 3 possible duplicates", generated with Claude Code -> go back to start.
I am constantly amazed how developers went hard for claude-code when there were and are so many better implementations of the same idea.
It's also a tool that has a ton of telemetry, doesn't take advantage of the OS sandbox, and has so many tiny little patch updates that my company has become overworked trying to manage this.
Its worst feature (to me at least) is the CLAUDE.md files sprinkled all over, everywhere in our repository. It's impossible to know when or if one of them gets read, and what random stale effect has been triggered when it does decide to read one. Yes, I know, I'm responsible for keeping them up to date and they should be part of any PR, but Claude itself doesn't always even know it needs to update any of them, because it decided to ignore the parent CLAUDE.md file.
Sometimes the agent (any agent, not just Claude — Cursor, Codex) would miss a rule or skill that is listed in AGENTS.md or CLAUDE.md, and I'm like "why did you miss this skill, it's in this file" and it's like "oh! I didn't see it there. Next time, reference the skill or AGENTS.md and I'll pick it up!"
Like, isn't the whole point of those files to not have to constantly reference them??
"Coding" is solved in the same way that "writing English language" is solved by LLMs. Given ideas, AI can generate acceptable output. It's not writing the next "Ulysses," though, and it's definitely not coming up with authentically creative ideas.
But the days of needing to learn esoteric syntax in order to write code are probably numbered.
OK, but seriously... if Anthropic is on the "best" path, aside from somehow nuking all AI research labs, an IPO would be the most socially responsible thing that they could do. Right?
https://www.economicsnetwork.ac.uk/archive/keynes_persuasion...