$99/month lol.
I have Perplexity, OpenAI, Claude, and Cursor subscriptions, and I still end up paying way less than $99/month.
Clearly you haven't done any research on price.
Aider and Cline are open source; I'm not sure why someone would subscribe to this unless it's the top model on http://swebench.com/
I tried it on two of my git repositories, just to see if it could write a decent commit summary. I was pleasantly surprised by how good the results were.
I was unpleasantly surprised that this already cost me 175 credits. Extrapolated over my ~100 repositories, that would put me at roughly 8,750 credits just to have it write a commit message on release day. That's far beyond the free tier and would eat up most of the $99 I'd have to spend as well. My Cody subscription is $8 a month. The pricing seems way off.
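The extrapolation above is simple back-of-the-envelope arithmetic; the credit figures are the commenter's, and the per-repo average is an assumption (175 credits split evenly across the two repos tried):

```python
# Back-of-the-envelope credit math: 175 credits covered two repos,
# so estimate the cost of doing the same for ~100 repos.
credits_for_two_repos = 175
repos = 100

credits_per_repo = credits_for_two_repos / 2   # 87.5 credits/repo (assumed even split)
credits_for_all = credits_per_repo * repos     # total for one pass over all repos
print(credits_for_all)  # → 8750.0
```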
Been using Cursor since launch. Really frustrating how they charge per message (500/mo) instead of by token usage. Like, why should a one-line code suggestion cost the same as refactoring an entire class? Plus it's been losing context lately and making rookie mistakes.
Tried Zed AI but couldn't get into it - just doesn't feel as smooth as Cursor used to. GitHub Copilot is even worse. Feels like they're so obsessed with keeping that $10/month price point that they've gutted everything. Context window is tiny and the responses are barebones.
I used Cursor for many months, but found that I couldn't deal with how slow and workflow-interrupting VSCode feels, so I went back to Zed.
I tried out and abandoned Zed AI, but I've found that Zed + aider is a really excellent setup – for me, at least.
For smaller things, Zed's inline Copilot setup works (nowadays) just as well as Cursor's and for things that are even a little bigger than tiny, I pull up aider and prompt the change the exact same way that I did with Cursor's Composer before.
I'm an odd case because I'm a pretty expert-level programmer, but I find that aider with o1 is helpful for hairy, expansive, and tedious things that would otherwise irritate me.
o1 models might try multiple approaches to come up with an answer, and only one of them might be correct; that's what they show in ChatGPT. It just summarises the CoT and doesn't include the whole reasoning behind it.
With Google's 1-million-token and Sonnet 3.5's 200,000-token limits, is there any advantage to using this over just uploading the PDF files and asking questions about them?
I was under the impression that you get more accurate results by adding the data in the chat.
You are 100% right about using unified diffs to overcome lazy coding. Cursor.sh has also implemented unified diffs for code generation: you ask it to refactor code, it writes the usual explanation, but there's an "apply diff" button that modifies the code using the diff, and I've never seen placeholder code in it.
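For anyone unfamiliar with the format being discussed: a unified diff only lists the changed lines (prefixed `-`/`+`) plus a little context, which is why it discourages the model from emitting "rest of code unchanged" placeholders. A minimal sketch using Python's standard `difflib` — the `greet.py` snippets are made up for illustration:

```python
import difflib

# Original code and a hypothetical model rewrite of it.
original = [
    "def greet(name):\n",
    "    print('Hello ' + name)\n",
]
refactored = [
    "def greet(name: str) -> None:\n",
    "    print(f'Hello {name}')\n",
]

# Generate the unified diff an "apply diff" step could consume.
diff = difflib.unified_diff(original, refactored,
                            fromfile="a/greet.py", tofile="b/greet.py")
text = "".join(diff)
print(text)
```

The output contains only `-`/`+` hunks for the two changed lines, so applying it touches nothing else in the file.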
There are YouTube videos that go into detail. From what I can remember, it first creates an embedding of your full codebase, then looks at your open file and the files in adjacent tabs, and extracts the code most relevant to your question.
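The retrieval step described above can be sketched in a few lines: embed each code chunk, embed the question, and rank chunks by cosine similarity. A real editor uses a learned embedding model; here a toy bag-of-words vector stands in so the sketch stays self-contained, and the file names and chunks are invented:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical indexed code chunks, keyed by file.
chunks = {
    "auth.py": "login user password check credentials start session",
    "db.py": "connect url open database connection pool",
}
query = "how does login check the password"

# Rank files by similarity to the question; the top hit is what
# would be pulled into the model's context.
ranked = sorted(chunks, key=lambda f: cosine(embed(chunks[f]), embed(query)),
                reverse=True)
print(ranked[0])  # → auth.py
```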