I tried it with cursor-agent, their CLI, and it generated better code than expected. YMMV. It was more thoughtful and strategic than the other frontier models.
Planning was OK for me: much slower than Sonnet, but comparable. Some of the code it produces is just terrible, though. Maybe the routing layer sends some code-generation tasks to a much smaller model, but then I don't get why it's so slow!
The only thing that seems better to me is the parallel tool calling.
Ben, we've had private conversations about this previously. I don't see any VC money grab nor am I aware of any.
Building a product that we've dreamed of building is not wrong. Making money does not need to be evil. I, and the folks who worked tirelessly to make Ollama better, will continue to build our dreams.
Interesting, but I have thought about it, and rarely is ours original either. All my code is original, but it's based on my past experience: learning, thinking about it, and improving it based on new knowledge. My 2 cents.
This is from their claude-code guide: "Claude Code is intentionally low-level and unopinionated, providing close to raw model access without forcing specific workflows. This design philosophy creates a flexible, customizable, scriptable, and safe power tool. While powerful, this flexibility presents a learning curve for engineers new to agentic coding tools—at least until they develop their own best practices". The agent is what makes Claude Code as good as it is. By not using it, you are using a model that is hit-or-miss on several aspects a programmer would need.
Been using it for a couple of days. The integration fixed the gap that previously required me to open the files myself to see updates and changes made in real time, whereas the terminal mode did things behind the scenes and you had no idea what it was doing. The series of nonsensical (but funny) status names it gives (Pondering, Twerking, Juggling, etc.) stops being useful once the initial novelty wears off.
Hmm... it seems that humans should be less interested in such things? In the context of LLMs, making Makefiles readable by humans matters less, since the LLM needs to understand them more than we do, no?
Ummm, no one has ANY idea how our brains work at the level of consciousness. Full Stop.
Please, let's all try to resist this temptation.
Physiology, neural dynamics, and sensory input and response all produce levels of functionality that exist even in fruit flies, of which we have practically zero comprehensive knowledge.
Higher-order processing, cognition, consciousness, and conscious thought all build on top of these fundamental biological processes, and we are far, Far, FAR from any level of understanding of them currently.
Human cognition has nothing in common with LLM operation. They are, at best, 10,000 monkeys typing from training sequences (that were originally generated by humans). And there is an emerging issue: with so much of the internet's content now being LLM-generated text that gets folded back into training data, the risk of model collapse is increasing.
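The model-collapse feedback loop can be illustrated with a toy sketch (my own analogy, nothing to do with how LLMs are actually trained): repeatedly fit a Gaussian to samples drawn from the previous generation's fitted Gaussian, so each "model" learns only from the prior model's output. The estimated spread tends to shrink toward zero over generations, i.e. diversity in the data is progressively lost.

```python
import random
import statistics

def collapse_chain(generations=60, n=5, seed=None):
    """One chain of repeated fit-and-resample on a Gaussian.

    Toy analogue of model collapse: each generation is "trained"
    (a Gaussian is fitted) only on samples produced by the previous
    generation's model, so estimation noise compounds and the
    fitted spread tends to collapse.
    """
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the "human" data distribution
    for _ in range(generations):
        samples = [rng.gauss(mu, sigma) for _ in range(n)]
        mu = statistics.mean(samples)     # refit on synthetic data only
        sigma = statistics.stdev(samples)
    return sigma

# Run many independent chains; the typical final spread is far below
# the original sigma of 1.0, even though each step is an honest fit.
finals = [collapse_chain(seed=s) for s in range(100)]
print("median final sigma:", statistics.median(finals))
```

The small sample size per generation exaggerates the effect for illustration; the qualitative point is that training on your own outputs loses tail information that no later generation can recover.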
It's very easy to anthropomorphize what's coming out of a language model; it's our job as software engineers to explain to the lay public, in understandable terms, what's actually happening. The LLM is not "understanding" anything.
Let's do our best to resist the urge to anthropomorphize these tools.