I’ve gone through $25 USD in API credits in a single afternoon with Claude Code (I love it, but that thing is thirsty for dollar bills).
I’ve been reluctant to try this sort of thing out because it’s fairly trivial for them to detect this and potentially come down with the ban hammer. I’d rather not risk it.
Anyway, I’ve found myself switching back to Aider as of late because it is much more conservative in how it uses its token budget.
IMO, the big problem with Aider is that it's not agentic. That keeps costs down, but most of the edit-test-fix magic in coding agents comes from the agent loop.
I've been using Claude with the MCP servers daily, and get put on pause a few times a day due to my heavy usage.
However, I do hope they don't plan to use the pricing they're using for Claude Max, since a single prompt usually generates about 50 tool calls for my use case (at Max pricing this would cost me $5.05). I'll easily burn $50 to $100 per hour, and I haven't even added all the tools I'd like to use yet...
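Working backward from those numbers (a rough sketch; the per-call rate is inferred from the $5.05 figure, not from any published price sheet):

```python
# Claimed cost of one prompt that fans out into ~50 tool calls.
cost_per_prompt = 5.05
calls_per_prompt = 50

# Implied cost of a single tool call.
per_call = cost_per_prompt / calls_per_prompt
print(f"${per_call:.3f} per tool call")  # → $0.101 per tool call

# A $50-$100/hour burn rate implies roughly this many prompts per hour.
low = 50 / cost_per_prompt
high = 100 / cost_per_prompt
print(f"{low:.0f}-{high:.0f} prompts per hour")  # → 10-20 prompts per hour
```

So the hourly figure is consistent with firing off a big agentic prompt every few minutes.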
If it gets expensive, I'll probably only use it for architectural work, and use my own self-hosted LLM for more tactical tasks.
This will be slower and less powerful, but we already have an AI server for image analysis, so it makes sense to use it.
Maybe, maybe not. Once something stops being free, the human brain starts setting expectations (at least mine does). Sure, that last conversation "only" cost $0.08, but since I'm now paying, it better damn well work and not fuck up my code. And even though it was only $0.08, I'll be cranky that I have to undo its changes because it went down a rabbit hole, and I now need to reprompt it or ask it what the hell it thinks it's doing… costing yet another $0.08.
Sure, it's a few pennies, but it does add up. I'm sure there is some research on, or a term for, this phenomenon.
I was thinking Cursor pricing. It becomes a whole different ballgame when you plug these tools into the provider's API and pay by the token. Suddenly you really start evaluating how much value you are actually getting out of the tool!
It's not in any sort of format to do this kind of analysis unfortunately. I'm also missing some data b/c I throw away certain kinds of datasets that are not useful for me. I can probably write some scripts to diff my archives with the current data.gov and see what's missing, but it won't be "complete". But it might still be useful...
I did however just write a Python script to pull data.gov from archive.org and check the dataset count on the front page for all of 2024, here are the results:
As you can see, there were multiple drops on the order of ~10,000 during 2024. So it's not that unusual. There could be something bad going on right now, but just from the numbers I can't conclude that yet.
(Specifically, it takes the first snapshot from every Wednesday of 2024.)
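A minimal sketch of how such a script might work, assuming the Wayback Machine's timestamp-redirect URLs (the original script isn't shown; the regex for the front-page dataset count is a guess at the markup, and `wednesdays`/`parse_count` are hypothetical names):

```python
import datetime as dt
import re

def wednesdays(year):
    """Return every Wednesday in the given year."""
    d = dt.date(year, 1, 1)
    d += dt.timedelta(days=(2 - d.weekday()) % 7)  # weekday(): Mon=0, Wed=2
    days = []
    while d.year == year:
        days.append(d)
        d += dt.timedelta(days=7)
    return days

def snapshot_url(day):
    """Wayback Machine URL for the capture nearest midnight on `day`.

    web.archive.org redirects a timestamp URL to the closest snapshot,
    so fetching this would return the archived data.gov front page.
    """
    return f"https://web.archive.org/web/{day:%Y%m%d}000000/https://www.data.gov/"

# data.gov's front page shows a figure like "307,128 datasets";
# the exact surrounding markup is an assumption here.
COUNT_RE = re.compile(r"([\d,]+)\s+datasets")

def parse_count(html):
    m = COUNT_RE.search(html)
    return int(m.group(1).replace(",", "")) if m else None

days = wednesdays(2024)
print(len(days), days[0], days[-1])        # → 52 2024-01-03 2024-12-25
print(parse_count("<h1>307,128 datasets found</h1>"))  # → 307128
```

From there it's a loop of fetching each `snapshot_url(day)` (with a polite delay between requests), running `parse_count` on the body, and plotting the counts over time.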
If I get around to re-formatting my archives this week, I'll follow up on HN :).
This makes me sad that my children won’t be able to visit some of my favorite camp sites.