Hacker News new | past | comments | ask | show | jobs | submit | dkersten's comments login

Yet another vscode fork…


Huh? I'm a bit confused by your comment:

1. Zed has been working great for me for ~1.5 years while I ignored its AI features (I only started using Zed's AI features in the past 2 weeks). Its vim keybindings are IMHO better than any other non-vim editor's, and the LSPs I've used (typescript, clangd, gleam) have worked perfectly.

2. The edit prediction feature is almost there. I do still prefer Cursor for this, but it's not so far ahead that I feel compelled to switch, and personally I find Zed to be a much more pleasant editor to use than vscode.

3. When you switch the agent panel from "write" to "ask" mode, it's basically that, no?

I'm not into vibe coding at all, I think AI code is still 90% trash, but I do find it useful for certain tasks: repetitive edits, boilerplate, or generating a first pass at a React UI while I do the logic. For this, Zed's agent feature has worked very well, and I quite like the "follow mode" as a way to see what the AI is changing so I can build a better mental model of the changes I'm about to review.

I do wish there was a bit more focus on some core editor features: ligatures still don't fully work on Linux; why can't I pop the agent panel (or any other panel for that matter) into the center editor region, or have more than one panel docked side by side on one of the screen sides? But overall, I largely have the opposite opinion and experience from you. Most of my complaints from last year have been solved (various vim compatibility things), or are in progress (debugger support is on the way).


You can set up tasks to run cmake for you: https://zed.dev/docs/tasks
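For anyone curious, a minimal tasks file (e.g. `.zed/tasks.json` in your project) might look like this; the labels and paths here are just placeholders, so check the docs above for the full schema:

```json
[
  {
    "label": "cmake configure",
    "command": "cmake",
    "args": ["-S", ".", "-B", "build"]
  },
  {
    "label": "cmake build",
    "command": "cmake",
    "args": ["--build", "build"]
  }
]
```

You can then run these from the task picker instead of dropping to a terminal.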

Personally, I just use the terminal for my build tools and Zed talks to clangd just fine for autocomplete etc.


Been using Zed as my daily driver (without AI) since sometime in late 2023, when I decided I wanted to ditch vscode for something leaner and faster. Love it.

I switched to Cursor earlier this year to try out LLM assisted development and realised how much I now despise vscode. It's slow, memory hungry, and just doesn't work as well (and in as keyboard-centric a way) as Zed.

Then a couple of weeks ago, I switched back to Zed, using the agents beta. AI in Zed doesn't feel quite as polished as Cursor (at least, edit predictions don't feel as good or fast), but the agent mode works pretty well now. I still use Cursor a little because anything that isn't vscode or pycharm has IMHO a pretty bad Python LSP experience (those two do better because they use proprietary language servers), but I'm slowly migrating to full stack typescript (and some Gleam), so I hope to fully ditch Cursor in favour of Zed soon.


It’s both.

Upbringing, background, mindset, social safety nets (eg knowing that if you fail, you’ll still be fine) — these things are huge and make a huge difference.

But 300k then is about 650k today, and just the time this would buy me alone would mean I’d be able to dedicate my full energy to a few projects that, while I don’t think could ever reach the scale of Amazon, would at least have the potential to make a reasonable return on that initial investment. The 300k is a huge boost that a lot of people don’t have access to.

But you’re absolutely right. If you’re not in the game at all, it’s very difficult to get in, and those other non-financial benefits are a big deal.


But the reality is that all things aren’t equal and you can’t fix all of those things, not in a way that is practical. You’d have to run everything serially (or at least in a way you can guarantee identical order) and likely emulated so you can guarantee identical precision and operations. You’ll be waiting a long time for results.

Sure, it's theoretically deterministic, but so are many natural processes, like air pressure or the three-body problem, if only we had all the inputs and fixed all the variables. The reality is that we can't, and it's not particularly useful to say that if we could, it would be deterministic.
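To make the "identical order" point concrete, here's a toy Python illustration of why reduction order alone breaks bit-exact reproducibility; parallel hardware can group the same terms differently from run to run:

```python
# Floating-point addition is not associative, so the order in which a sum
# is reduced changes the result at the last bits. A GPU kernel that sums
# the same values in a different order can therefore produce a (slightly)
# different output, even with identical inputs.
a, b, c = 0.1, 0.2, 0.3

left_to_right = (a + b) + c   # one possible reduction order
right_to_left = a + (b + c)   # another possible reduction order

print(left_to_right == right_to_left)  # False: the two orderings differ
print(left_to_right, right_to_left)
```

Tiny differences like this get amplified through many layers, which is why "same prompt, same weights" still doesn't guarantee the same tokens.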


It's definitely reachable in practice. Gemini 2.0 Flash is 100% deterministic at temperature 0, for example. I guess it's due to the TPU hardware (but then why are other Gemini models not like that...).
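For context, "temperature 0" is typically implemented as (or converges to) greedy decoding: logits are divided by the temperature before the softmax, so as the temperature shrinks, essentially all probability mass lands on the highest-logit token. A rough Python sketch with toy logits (not any real model's values):

```python
import math

def sample_probs(logits, temperature):
    """Softmax over temperature-scaled logits (toy illustration)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
print(sample_probs(logits, 1.0))    # probability spread across tokens
print(sample_probs(logits, 0.01))   # nearly all mass on the argmax token
```

So temperature 0 removes the sampling randomness, and any remaining nondeterminism comes from the numerics underneath.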


Anyways, this is all immaterial to the original question, which is whether LLMs can do randomness [for a single user with a given query]. From a practical standpoint, the question itself needs to survive "all things being equal"; that is to say, suppose I stand up an LLM on my own GPU rig and the scheduler doesn't do too many out-of-order operations (very possible depending on the ollama or vllm build).
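On that framing: if you control the sampling loop yourself, seeding the RNG makes the token draws reproducible regardless of the model's "randomness". A toy sketch (the vocabulary and weights are made up for illustration):

```python
import random

def sample_tokens(seed, n=5):
    """Draw n tokens from a fixed toy distribution, seeded for reproducibility."""
    rng = random.Random(seed)             # private RNG, isolated from global state
    vocab = ["the", "cat", "sat", "mat"]
    weights = [0.4, 0.3, 0.2, 0.1]        # stand-in for model output probabilities
    return [rng.choices(vocab, weights=weights)[0] for _ in range(n)]

print(sample_tokens(42) == sample_tokens(42))  # True: same seed, same draws
print(sample_tokens(42))
```

This is roughly what a `seed` parameter in an inference server does; it pins the sampling, while any residual variation comes from the forward pass itself.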


I also love optimising, especially low level code.


People said similar things about smart contracts, yet here we are, with them being rather niche. I do agree that once the Alexas and Siris are LLM powered with MCP (or similar) support, these kinds of use cases will become more valuable, and I do feel it will happen and gain widespread use eventually. I just wonder how much other software it will actually replace in reality vs how much of it is hype.


I strongly believe it also applies to the AI itself.

If the AI wrote the code to the top of its ability, it doesn’t have the capability to debug said code, because its ability to detect and correct issues has already been factored in.

You end up with a constant stream of "I see the issue, it's: <not the actual issue>", which is consistent with my experience of trying to have LLMs debug their own code without basically doing the work myself and pointing them to the specific actual issue.


And the agent beta is looking pretty good, so far, too.

