Zed AI (zed.dev)
401 points by dahjelle 7 months ago | 280 comments



As I’ve said in yesterday’s thread[0], contrary to many others I find the AI integration in Zed to be extremely smooth and pleasant to use, so I’m happy they’re doubling down on this.

However, personally, I prefer to have it configured to talk directly to Anthropic, to limit the number of intermediaries seeing my code, but in general I can see myself using this in the future.

More importantly, I’m happy that they might be closing in on a good revenue stream. I don’t yet see the viability of the collaboration feature as a business model, and I was worried they’re gonna have trouble finding a way to sensibly monetize Zed and quit it at some point. This looks like a very sensible way, one that doesn’t cannibalize the open-source offering, and one that I can imagine working.

Fingers crossed, and good luck to them!

[0]: https://news.ycombinator.com/item?id=41286612


>However, personally, I prefer to have it configured to talk directly to Anthropic, to limit the number of intermediaries seeing my code, but in general I can see myself using this in the future.

Same. I can kind of feel OK about my code going to Anthropic, but I can't have it going through another third party as well.

This is unfortunately IT/security's worst nightmare. Thousands of excitable developers are going to be pumping proprietary code through this without approval.

(I have been daily driving Zed for a few months now - I want to try this, I'm just sceptical for the reason above.)


Add the following to your settings.json:

  "assistant": {
    "version": "2",
    "default_model": {
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-20240620"
    }
  }
  
Once this is done, you should be able to use Anthropic if you have an API key. (This was available before today's announcement and still works today as of Zed 0.149.3)


That was part of the reasoning behind open-sourcing my AI assistant/software dev project. Companies like Google have strict procedures around access to customer data. The same can't always be said about a startup racing to not run out of cash.


I really wanted to try this out with a difficult bug, as I've had Zed installed for a while and haven't actually used it. But I have no idea if I'd get in trouble for that... even though our whole team uses VSCode which I'm sure has scanned all our codebases anyway.


Also, I know some people say “just let me pay for the editor” but I don’t think that’s actually a viable path.

The editor is open-source, and it being open-source is great. Others can contribute, it generally helps adoption, it will probably help with getting people to author plugins, and it means if they go in an undesirable way, the community will be able to fork.

So without making it non-open-source, they’d need to do open-core, which is incredibly hard to pull off, as you usually end up cannibalizing features in the open-source version (even if you have contributors willing to contribute them, you block it to not sabotage your revenue stream).


I would like us as an industry to promote paying for things we use. Crazy idea, I know.

Open source is great and fun and an incredible force multiplier for the world, but when you want to do this stuff for a living, you have to charge money for it somehow, and if you're a software business, and not just software adjacent, it means charging for software.


I’m not sure if that’s clear or not from my comments, but I completely agree (and pay for JetBrains editors, amongst other tools).

Though in the case of “fundamental / ecosystem center” tools like code editors which e.g. others will build plugins for I believe there’s immense value in it being open-source.

Thus, I’m rooting for them to find a business model where they get our money without making it non-open-source.


I don’t think this will ever happen. Software Engineering is fairly unique in that our tools are effectively the same thing we create (code).

If carpenters could build saws out of wood, they would probably make their own tools (and many do, for tools that can be made of wood like hand saws).

You generally have to sell goods to people who can't make them themselves. How often do carpenters buy shelves instead of making them? How often do construction workers hire someone else to build a shed for them?

I don’t care for closed source editors in general. My editor and its configuration are one of the most important tools I have, and I don’t want to hand someone the ability to remotely disable it. I would rather go back to using plugin-less vim, because I can at least depend on the tool.

JetBrains is probably the only one I would even consider, and I still don’t care for it (and really hate having a separate editor per-language if I don’t have to).


As long as it is not via subscriptions…


Everyone goes to subscriptions, because trying to get new customers every month at a rate that pays everyone's salaries (at a typical engineering level), plus everyone else at the company, and office rental, doesn't scale, especially in developer tools, where many prefer to suffer with lesser tooling rather than pay.


Subscriptions are only okay for me if it's done like JetBrains does it, where you can keep the old versions permanently. Partly because it implies what has to be true for monthly payments to make sense: that the software keeps getting better and keeps gaining support for new tech, tools, and databases as they appear. If I'm paying monthly for something that doesn't cost them anything, that feels illogical on my side. This should become a thing with online services too: at least let me keep some kind of upgraded version of your service after I stop paying. Something, please, anything to make me feel less like a chump for having paid and then ending up with nothing in the end.


Pay-for closed-source is profitable for about a week. Then some asshole offers a shittier but free facsimile and community contributions render it good enough to kill the original.


I think we're all pretty familiar with this cycle, but there do exist durable products that don't succumb to this issue within reason and the burden rests on the original authors to find ways to create themselves a moat like any other business.


A lot of Zed users are developers at medium-to-large businesses, so I think full source-available with licensing is very much something they could do.

They could put the code on GitHub, allowing contributions, with a license that turns into BSD or MIT after two years, and with the caveat that you can only run the (new) code if you purchase a license key first.

In companies of reasonable size, the deterrent against piracy is the existence of a license itself; the actual copy protection and its strength aren't as important. The reason those companies (mostly) don't crack software isn't that the software is hard to crack, it's that their lawyers wouldn't let them.

Sure, this would make Zed somewhat easier to crack, but I think the subset of users who wouldn't use a Zed crack if Zed was binary-only but would use one if there was source code available is very small.


But then it wouldn't be open-source anymore, would it? That would likely make people much less willing to contribute. You don't really have the safety of an easy way to fork either, if all you can fork is a 2-year-old version (which plugins are likely not compatible with anymore).

It would also (deservedly) likely anger a large part of the existing community, especially the ones most involved, like plugin authors, who put their time in assuming they're contributing to an open-source ecosystem.

Thus, I believe that ship has sailed the moment they made it open-source in the first place.


My understanding of their comment is that the source is made available immediately. It’s just that you need to pay for a license to use it for the first couple years.


Yes, and open-source and source-available are two different things. The comment I responded to suggested they switch to a source-available license which falls back to an open-source license after a time period passes.


Leave it open source exactly as it is now, do not put convenient downloadable binaries up on Github. Allow me to either compile it myself or pay for the binaries?

There must be some other way to monetize open source.


Selling support or consulting services.

It just takes time to grow that.


I just want a fast programmable text editor with a native GUI and good defaults.

But that seems really tough to find, for some reason.

Zed is so close, but I’d much rather see a focus on the “programmable” part and let the AI and collaboration features emerge later out of rich extensibility (i.e. as plugins, perhaps even paid plugins) than have them built-in behind a sign-in and unknown future pricing model.


In the case of Zed it was always inevitable, a text editor doesn't raise >$10M in venture capital unless there's a plan to stuff it full of premium subscription features.

Warp Terminal is a similar story, >$50M in funding for a terminal emulator of all things...


What I'd give to have a look at the roadmaps to see how they hope to 10x/100x VC investments with a text editor and terminal.


Just look at Postman.


Postman’s success seems almost inevitable in hindsight. Look at how many tutorials involve curl at some point, and Postman was “curl with a GUI”.


The funny thing is Atom was the origin story of Zed, written in some C++ and a lot of CoffeeScript exactly so it could be very programmable.

Also, Spacemacs? It technically can run in a terminal but definitely has a lot of UI features. Very programmable.


> I just want a fast programmable text editor with a native GUI and good defaults.

It's called TextAdept. Much of it is itself built on its own Lua extensibility story, which runs on a fairly compact C core. Both native GUI and terminal versions, using the same user config (keybinds etc). Linux, Mac OS, Windows builds. LSP support built in. Plenty of community-produced extensions around (but of course not as vast a range as VSCode's VSX eco-system furnishes).

https://orbitalquark.github.io/textadept/


Sublime Text is likely closer to what you're looking for.


I retry it every now and then and miss the easy extensibility of Neovim. Can we build something that marries both worlds?


RIP Onivim 2, it was so close to this niche.


By we you mean anybody besides you? Complain-driven development.


Happy to try building one to free you from complaint-driven motivation.


Please do. Editors and IDEs need new ideas.


Is VSCode out for some reason? I don’t care for the defaults for some language plugins, but it seems easier to ship VSCode config than to build a whole editor.

I think the VSCode config is just JSON on disk and I believe you can install plugins on the CLI. You could probably have a bash or Python script do the setup pretty easily.

Edit: responded before coffee, missed the “native GUI”; please disregard


To be fair, you can turn off all the AI stuff with a single config.

"assistant": { "enabled": false, }


It doesn't mean your code is NOT being uploaded somewhere. They could add an easy switch to use the editor 'offline', not that they have to. I'll go back to Helix.


> I just want a fast programmable text editor with a native GUI and good defaults.

What would that be for each OS?

Linux: Kate (at least if using KDE; which one would it be for GTK / Gnome?)

macOS: TextMate?

Windows: Notepad++?


For GNOME, the project's native text editor is: https://gitlab.gnome.org/GNOME/gnome-text-editor

It is significantly less featureful than Kate or your other apps though.


I use Kate on Windows as well.

I really love the Documents Tree plugin and could never go back to old style tabs.


Notepad++ without plugins is available for Linux:

NotepadNext – a cross-platform reimplementation of Notepad++ | Hacker News https://news.ycombinator.com/item?id=39854182


Can also recommend CotEditor for macOS as a Notepad++ replacement. Don't know how "programmable" it is or what even falls under that label.


macOS: BBEdit now and forever.


BBEdit Doesn’t Suck!


> with a native GUI

This means 100x more effort in the long run for a cross-platform editor. Maybe if developers lived for 200 years, this could be possible. We'll need to solve the human ageing problem before the cross-platform "native GUI" problem.


For a text editor UI, it really isn't quite so challenging: see Sublime Text, or TextAdept, and others mentioned across this sub-thread.


What are the benefits of having a "native GUI" that a terminal interface cannot provide?

Extensibility of neovim or emacs covers all my text editor use cases.


Neovim and Emacs extensibility are great!

Native GUIs offer far better accessibility (TUIs are not screen-reader accessible, and neither is Emacs' GUI currently), hugely improved UI flexibility and consistent developer APIs (Emacs GUI is inconsistent across platforms and tricky to work with, every Neovim plugin reinvents ways to draw modals/text input because there's no consistent API), reduced redraw quirks, better performance, better debugging (as a Neovim plugin dev I don't want to spend time debugging user reports that relate to the user's choice of terminal emulator this week and not to Neovim or my plugin code).


The monetary incentives are not in your (and my) favor.


If a motivated solo dev thought there might be at least 10,000 people who would pay 100 USD a year for a text editor with better extensibility and performance than VS Code and better defaults/richer GUI APIs than vim/Emacs, I can see why it might be tempting for them to try.


I would also cast my vote for sublime text. The performance is amazing, the defaults are great and the extensions cover a lot of the use cases


What does “native GUI” mean?


I understood it as not a wrapped web application, like Electron or Tauri based applications.


Using the same platform-specific graphics API the OS vendor builds their GUI apps with, ideally, but I'll also settle for "not a TUI, not a web application shipped as a desktop app, even if the OS vendor currently builds their GUI apps as web applications shipped as desktop apps".


A GUI that uses native controls and platform UI conventions with the native behavior expected on the given platform, or a near-indistinguishable equivalent of that.


Not electron


What's your current text editor and what's wrong with it?


If something is "so close," probably just use it.


I don't know of a single modern desktop application that is deploying front-ends simultaneously in WinUI 3, AppKit, and GTK.


Emacs? (I don't have any WinUI 3 machines so can't verify, but does support GTK and AppKit if built with such support).


Not Firefox?


There's two ways to make extensions to Zed, providing context for AI. In the post they show off making a Rust -> WASM based extension and also mention a server based model. There's also a third option -- Zed is open source. You don't have to use their auth, it just makes collaboration easy.


> Extensions can add the following capabilities to Zed: Languages, Themes, Slash Commands

This is a great start but it's far from what most would accept as "programmable" or richly extensible.


We call the latter a "Context Server": basically any process that communicates via JSON-RPC over stdio can do it. Documentation for that is here: https://zed.dev/docs/assistant/context-servers
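
To make the "any process that speaks JSON-RPC over stdio" idea concrete, here is a minimal sketch in Python of such a loop. The newline-delimited framing and the "context/get" method name are assumptions for illustration only; the linked docs define the actual framing and messages Zed expects.

  #!/usr/bin/env python3
  # Minimal sketch: a process speaking JSON-RPC 2.0 over stdio.
  # Framing (one JSON object per line) and the "context/get" method
  # are illustrative assumptions; see Zed's context-server docs for
  # the real message shapes.
  import json
  import sys

  def handle(request):
      if request.get("method") == "context/get":  # hypothetical method name
          return {"content": "project-specific context goes here"}
      return None

  for line in sys.stdin:
      line = line.strip()
      if not line:
          continue
      req = json.loads(line)
      result = handle(req)
      if "id" in req:  # only requests (not notifications) get a response
          sys.stdout.write(json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result}) + "\n")
          sys.stdout.flush()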


I recently switched from neovim to zed, and I overall like Zed. I miss telescope, and think some vim navigation was better, but I suspect that it has to do with how much effort I put into configuring one over the other, so time will tell.

My biggest gripe was how bad the AI was. I really want a heavy and well-crafted AI in my editor, like Cursor, but I don't want a fork of the (hugely bloated and slow) VSCode, and I trust the Zed engineering team much more to nail this.

I am very excited about this announcement. I hope they shift focus from the real-time features (they make no sense to me) to AI.


Agreed about the AI, the last time I tried Zed I was also trying Cursor at the same time, and the Cursor AI integration vs what Zed offered was just night and day. So I got the Cursor subscription. But I haven't used it in 2 months (I don't get to code a lot in my job).

This was maybe 3-4 months ago, so I'm excited to try Zed again.


Have you tried using Codeium in Neovim[0]? It may not have all the features shown in this post, but still quite good. I will admit though that I'm enticed to try out AI in Zed now.

[0] https://github.com/Exafunction/codeium.nvim


Codeium works well but I really like the copilot chat plugin as well - it generally does a good job of explaining highlighted code, fixing errors, and other code interactions.


I tried codeium for 6 months and eventually went back to copilot. I think codeium is tier 2


After using Cursor for some hobby stuff, it's really good. I was surprised at how well it managed the context, and the quick suggestions as you're refactoring really add up since they're generally exactly what I was about to do.


I’ve been likewise surprised. The code part is fine; I’m impressed but not “worried it will take my job” impressed.

Where it really shines for me is repetitive crap I would usually put off. The other day I was working with an XML config and creating a class to store the config so it could be marshalled/unmarshalled.

It picked up on the sample config file in the repo and started auto-suggesting attributes from the config file, in the order they appear in the config file, even though the config was camel cased and my attributes were snake cased.

The only thing it didn’t do correctly was expand a certain acronym in variable names like I had done on other attributes. In fairness, the acronym is unclear, which is why I was expanding it, and I wouldn’t be surprised if a human did the same.


Are you referring to cursor or zed here as really good? It’s unclear to me as somebody that doesn’t regularly use either.


The way it reads (to me), it’s about Cursor being good


What were the Zed features that made you switch? I feel like with today's ecosystem it's easier to complete the Neovim experience with plugins than to wait for the Zed devs to catch up.


nvim has a really good cursor integration plugin https://github.com/yetone/avante.nvim


second person to share this with me recently, will have to try it out. Looks pretty alpha, but interesting


Interesting that this seems to be the announcement of Anthropic's Copilot alternative:

> A private beta of the Claude 3.5 Sonnet's new Fast Edit Mode, optimized for text editing. This upcoming mode achieves unprecedented speed in transforming existing text, enabling near-instantaneous code refactoring and document editing at scale.


Sounds a lot like existing Claude integration + Caching (as most queries rely on relatively static code context)


It's a bit more than that; you can clearly see that the decoding speed is super fast.


Yeah, that caught my eye too. Looks to me like speculative editing (they mentioned that it's faster to echo its input) + prompt caching; it would literally build on all the tech they have.


AI assistants just slow me down. It's a very rare case where I find them actually useful. I am generally concerned by the number of devs who seem to claim that they're useful. What on earth are y'all accepting?


I find the only "AI" I need is really just "Intellisense". Just auto complete repetitive lines or symbol names intelligently, and that doesn't even require an AI model.


I'm curious, what kind of work do you do? Does stack overflow slow you down?


Lots of golang kubernetes work these days.

Stack Overflow is used when I'm stuck and searching around for an answer. It's not attempting to do the work for me. At a code level I almost never copy-paste from Stack Overflow.

I also utilize Claude and 4o at the same time while attempting to solve a problem, but they are rarely able to help.


Kubernetes, AWS, Cloudformation and Terraform etc sort of work is still not good with AI.

The current AI code rocket ship is VSCode + Perl/Python/Node+ReactJS + Copilot.

This is basically a killer combination. Mostly because large amounts of Open source code is available out there for training models.

I'm guessing there will be industry-wide standardisation, and Python use will see a further mad rise. In the longer run some AI-first programming language and tooling will be available which will have first-class integration with the whole workflow.

For now, forget about golang. Just use Python for turbo charged productivity.


> For now, forget about golang

I write kubernetes controllers. Golang is here to stay.

> Just use Python for turbo charged productivity

This is my problem with all the "AI" bros. They seem to consistently push the idea that quickly writing code is the end-all of "productivity"; it's akin to "just shovel more shit faster, it's great".

Speed != productivity


I have seen several rounds of this over decades. Google will make bad programmers, Perl is a write only language, Node is cancer, Eclipse is slow etc etc.

Eventually you realise you just can't win against better things. These are juggernauts; fighting them is pointless. Because it doesn't matter whether you use it or not. Most people will, and will make great progress.

You will either be out of the industry or be forced to use it one way or the other.


K8s is probably particularly bad because their package convention basically requires vanity imports, and I would wager the vanity names people choose are wildly inconsistent.

It also doesn’t help that many packages have 3+ “versions” of themselves, so a vanity import named “core” could be v1alpha1, v1beta1 or v1.


Not GP, but the kind of search I do mostly are:

- Does this language have X (function, methods,...) probably because I know X from another language and X is what I need. If it does not, I will code it.

- How do I write X again? Mostly when I'm coming back to a language I haven't touched for a while. Again I know what I want to do, I've just forgotten the minutiae of how to write it.

- Why is X happening? Where X is some cryptic error from the toolchain. Especially with proprietary stuff. There's also how to do X, where X is a particular combination of steps and the documentation is lacking. I head to forums in that case to know what's happening or get sample code.

I only need the manual/references for the first two. And the last one only needs to be done once. Accuracy is a key thing for these use cases, and I'd prefer snippets and scaffolds (deterministic) over LLMs for basic code generation.


I use LLMs exactly and exclusively for the first two cases - I just write comments like:

// map this object array to extract data, and use reduce to update the hasher

And let the LLMs do the rest. I rarely find myself back in the browser - 80% of the time they spit out a completely acceptable solution, and for the other 20% at least the function/method is correct. It has saved me a lot of context switching.
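
A rough sketch of that workflow (in Python here for brevity; the completion shown below the comment is just the kind of thing a model typically produces, not output from any particular tool):

  import hashlib

  records = [{"id": "a1", "payload": "x"}, {"id": "b2", "payload": "y"}]

  # map this list of dicts to extract the ids, and fold them into the hasher
  hasher = hashlib.sha256()          # <- everything below the comment is the
  for rec in records:                #    sort of body the model fills in
      hasher.update(rec["id"].encode())
  digest = hasher.hexdigest()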


For me the quick refresh is better, as I only need to do it once (until I stop using the language/library for a while), and it can be done without internet (local documentation) or high power consumption (if you were using local models). And with a good editor (or IDE) all of this can be automated (snippets, bindings to the doc browser, ...), which for me is a better flow state than waiting for an LLM to produce output.

P.S. I type fast. So as soon as I have a solution in my head, I can write it quickly, and if I have a good REPL or edit-compile-run setup, I can test just as fast. Writing the specs, waiting for the LLM's code, and then reviewing it feels more like being a supervisor than a creator, and that's not my kind of enjoyable moment.


I agree with you, creating something just feels better than reviewing code from a LLM intern ;D

That's why I almost never use the 'chat' panel in those AI-powered extensions, for I have to wait for the output and that will slow me down/kick me out of the flow.

However, I still strongly recommend that you try *LLM autocompletion* from Copilot (GitHub) or Copilot++ (Cursor). In my experience it works just like context-aware, intelligent snippets and, heck, it's super fast - the response time is 0.5~1s on average behind a corporate proxy, sometimes even fast enough to predict what I'm currently typing.

I personally think that's where the AI coding hype is going to bear fruit - faster, smarter, context+documentation aware small snippets completion to eliminate the need for doc lookups. Multi file editing or full autonomous agent coding is too hyped.


I'm just as baffled by the people who use Stack Overflow daily. It's increasingly rare that I use it these days, to the point where I deleted my account a few years back and haven't missed it. Don't people read docs anymore? In many ways I feel lucky that I learned at a time when I only had offline docs, which forced me to become good at understanding documentation since it's all I had.


To give you some insights from someone with a different starting point:

For context, I'm a 22-year-old CS student and part-time SRE working on everything related to Kubernetes (Golang, scripting, YAML, ...). I can assure you that reading the fucking manual isn't a thing my fellow students or I did when we could avoid it. I think that might be because university projects don't tend to be long-lasting, and finding quick pre-built solutions - without understanding them - works just fine. There is no penalty for technical debt.

Now I almost exclusively read the primary docs or code, and I think that might (surprisingly?) be because of Copilot.

The Neovim Copilot extension resulted in me not feeling the need to switch to my browser all the time. Not having to do this context switch and looking more at my code got me into reading the LSP-provided symbol docs. After some time I noticed that Copilot just made me feel like I knew what I was doing, while reading the overlay docs provided a way deeper understanding.


Thanks for the perspective!

Do you think this helps or hinders your ability to internalise the information (ie so that you don’t need to look it up, in the browser or from the LSP)?

For me, I feel that documentation is a starting point, but the goal is always to not need to look it up, after a little ramp up time.

With that said, I do use ChatGPT as a replacement for documentation sometimes, asking it how to do things instead of looking it up, but again the goal is to internalise it rather than to rely on the docs or tools. I won’t shy away from reading primary documentation, though, when necessary.


> Do you think it [copilot?] helps or hinders...

It showed me some nice shortcuts (quick anon js functions and the like) which I will be using in the future, but I noticed that I didn't remember multi step code flows. For example while trying to get the response from an http request in go, there is a chain of calls which you will most likely follow. Building the client > making the request > checking the response code > reading the body > maybe parsing the body if it's structured text. I had written this kind of flow hundreds of times while having copilot running, and I still could not write it myself - I just had this abstracted idea of what's happening, but no memory of the syntax.

> as a replacement for documentation

I feel like they are too focused. And not having to go through the docs to find the piece I'm searching for results in me missing out on important context / possibly even better ways to solve my problem.


Do you use stack overflow for every keystroke?


At minimum 6 stack overflow searches per keystroke.


Did Google make you less productive when looking things up about code? If it did not, then I don't see how looking things up with AI can be worse.


> If it did not then I dont see how looking up with AI can be worse

Looking up with AI is worse because it's WRONG a lot more. Random rabbit holes, misdirection. Stuff that SOUNDS right but is not. It takes a lot of time and energy to discern the wheat from the chaff.

Sure, you can find misleading or outdated blog posts or forum discussions with a Google search, but the information is far more grounded in correctness than anything from an LLM.


This is my experience with trying to use AI for coding tasks too. I've had back-and-forths with AI that involve me trying to get it to fix things to get a final working function, but since it doesn't actually understand the code, it fails to implement the fixes correctly.

Meanwhile, the stuff you find through a traditional web search tends to either be from a blog post where someone is posting actual working code snippets, or from StackOverflow where the code tends to be untested initially but then gets comments, votes, and updates over time that help boost confidence in the code. It's far more reliable to do a web search.


Why do people pretend that Google search is straightforward?

> the stuff you find through a traditional web search tends to either be from a blog post

Is that so? Most of my hits have been Stack Overflow and GitHub issues, where there are false positives - the same problem as AI hallucination.


Because it is? I tend to do my queries as keywords instead of as questions, and I tend to get good results. But most of the time, I'm just seeking the online manual to understand how things work and what is happening, not an exact solution. It's the equivalent of using a library to write a thesis. That only requires getting familiar with the terminology of the domain, knowing where the best works are, and knowing how to use indexes and tables of contents.


>>Sure you can find misleading or outdated blogposts or forum discussions with a google search but the information is far more grounded in correctness then anything from an LLM.

This was the case only 2-3 months back. But the assistants have all moved to GPT-4/Sonnet, and the newer versions are just a whole lot better and more accurate.

That's the whole idea behind AI, when you do find something is wrong, the error function kicks in and the weights are tweaked to more correct values.

When GPT-5 comes along it will be a whole other level of accurate. In fact it's already close to 90% accurate for most tasks; with GPT-5 you could say that number might go to 95% or so, which is actually good enough for nearly all the production work you could do.

Of course, in the coming years, I'm guessing coding without AI assistance will be somewhat similar to writing code on paper or something like that. You can still do it for fun, but you won't be anywhere near as productive at a job.


I use GPT-4o and Sonnet regularly. They are so often wrong. Just yesterday GPT-4o spat out consistently incorrect tree-sitter queries and refused to accept it was wrong. It's all so pointless and slowed me down compared to just reading the documentation.


What’s the point of this argument? If the user you’re replying to has been on this site, they’ve probably seen this counterpoint before.

“Aha!” they say, “I only realized my folly after the 25th time someone pointed out googling also takes time!”

Maybe there’s some interesting difference in experiences that shouldn’t just be dismissed.


What’s the point of this argument? Maybe I have heard people bashing AI... 26 times?


Some of Cursor's features appeal to my laziness, say: "convert to JavaScript" and hit apply... For now it's still a bit slow (streaming words), but when this is immediate? The fastest Vimmer won't stand a chance. Select code, dictate the change, review, apply - it will save my wrists.


The main issue with these set of tools is that I mostly read and understand code more than writing it myself.

Not enough attention is being given to this imbalance.

It is impressive having an AI that can write code for you, but an AI that helps me understand which code we (as a team) should write would be much more useful.


My immediate instinct was to agree with you whole-heartedly. Then I remembered that Copilot has an "Explain this" button that I never click.

Maybe this is because I'm just not used to it, maybe the workflow isn't good enough, or maybe it's because I don't trust the model enough to summarize things correctly.

I do agree that this is an area that could use improvement and I see a lot of utility there.


My issue is not explaining how the code work. My issue is understanding why the code exists at all.

The default workflow is to follow the commit history until I get to where and when the code in its current shape was introduced. Then I try reading the commit message, which generally links to a ticket, and then acquire from the team's tribal knowledge why it was done like that. If it is still necessary, what we can do today instead, etc...

And similarly when designing new code that needs to integrate with an existing piece of code... Why are there such constraints in place? Why was it done like that? Who in the team knows best?


AI could help with this. It could pull the version history for a chunk of code you have highlighted, pull up the related tickets, sift through them, and then try to summarize from commit messages and comments on the ticket why this was added.

What I could have used the other day was "find the commit where this variable became unused". I wanted to know if the line that used it was intentionally deleted or if it got lost in a refactor. I eventually found it but I had to click through dozens of commits to find the right one.


I get the frustration with this workflow too. This is where having all of your issues in Git would be great, but alas no one wants to collaborate in ticket comments via Git commits...

Inevitably the issue tracker gets moved/replaced/deprecated, and so all of that context disappears into the void. The code, however, is eternal.


This is a great feature, but targeted mostly at junior developers who don't yet understand how a particular library or framework works. But when I read code, I spend most of my effort trying to understand what the original developer meant by it, and LLMs are not yet very helpful in that regard.


I partially agree with you: not knowing what particular functions etc. do is one use case, another for me would be to detangle complicated control flow.

Even if I can reason about each line individually, sometimes code is just complicated and maintains invariants that are never specified anywhere. Some of that can come down to "what were the constraints at the time and what did we agree on", but sometimes it's just complicated even if it doesn't have to be. The latter is something I would love for a LLM to simplify, but I just don't trust it to do that consistently correct.


I was trying to express exactly what you say!

Sorry if it wasn't clear.

Not only the intention, but also the reason behind it.


There are a lot of tools that promise to help you understand existing code. However, it's a really hard problem and to me none of them is production ready.

Personally, I think the problem is that if the AI got it wrong, it would waste a lot of your time trying to figure out whether it's wrong or not. It's similar to outdated comments.


Sounds like you’re better helped by something like codescene?


No one should be submitting code that's difficult to understand, regardless of how it was 'written'. This problem exists just the same as a developer who's going to copy large blocks of StackOverflow without much thought.


It isn't always the piece of code that is hard to read. It's about how it fits into the 200,000+ line application you're working with, what it was trying to solve, whether it solved the right problem, and whether there will be unexpected interactions.


> No one should be submitting code that's difficult to understand

what? ok. Nice ideals.


Hrm. Still not quite what I crave.

Here's roughly what I want. I want to be able to highlight some block of code, ask the AI to modify it in some way, and then I want to see a diff view of before/after that lets me accept or reject changes.

LLMs often get code slightly wrong. That's fine! Doesn't bother me at all. What I need is an interface that allows me to iterate on code AND helps me understand the changes.

As a concrete example I recently used Claude to help me write some Python matplotlib code. It took me roughly a dozen plus iterations. I had to use a separate diff tool so that I could understand what changes were being made. Blindly copy/pasting LLM code is insufficient.


"Here's roughly what I want. I want to be able to highlight some block of code, ask the AI to modify it in some way, and then I want to see a diff view of before/after that lets me accept or reject changes."

That's exactly what this new set of Zed features lets you do.

Here's an animated GIF demo: https://gist.github.com/simonw/520fcd8ad5580e538ad16ed2d8b87...


> I want to be able to highlight some block of code, ask the AI to modify it in some way, and then I want to see a diff view of before/after that lets me accept or reject changes.

If you squint, that's the same as using an IDE with first class git support and co-editing with a (junior) pair programmer that commits each thing you ask them to do locally, or just saves the file and lets you see stageable diffs you can reject instead of push.

Try the /commit workflow using aider.chat as a REPL in your terminal, with the same git repo open in whatever IDE you like that supports real time git sync.

The REPL talks to you in diffs, and you can undo commits, and of course your IDE shows you any Aider changes the same as it would show you any other devs' changes.

That said, I use Zed and while it doesn't have all the smarts of Aider, its inline integration is fantastic.


Hey, this is Nate from Zed. Give the inline assistant a try. Here is a little demo: https://share.cleanshot.com/F2mg2lXy

You can even edit the prompt after the fact if the diff doesn't show what you want and regenerate without having to start all over.


Ah interesting. I missed that when browsing the page.

Can you make the diff side-by-side? I’ve always hated the “inline” terminal style diff view. My brain just can’t parse it. I need the side-by-side view that lets me see what the actual before/after code is.


Haha I actually agree – I always prefer them side by side.

We don't have side-by-side diffs yet, but once we do I'll make sure it works here :)


> I want to be able to highlight some block of code, ask the AI to modify it in some way, and then I want to see a diff view of before/after that lets me accept or reject changes.

Zed does that - here's a clip of it on some Python code:

https://youtu.be/6OdI6jYpw9M?t=66



FWIW, that's what cursor does: https://www.trycursor.com/


Excited to try Zed, but FWIW, this is exactly Cursor's default behavior if you use the side chat panel


I'm more in this camp:

> Add build time options to disable ML/AI features

https://github.com/zed-industries/zed/issues/6756

Just give me a good editor.


Anthropic is an evil company. I wanted to try their subscription for a month; now they are storing my credit card info forever without an option to remove it, yet a single click of a button will instantly resubscribe me. I don't understand how one can seriously think they will not sell all your data at the first opportunity to make money, because with such shady subscription/payment practices it instantly gives money-before-all vibes for the whole product and the company.


Fairly certain selling people's credit card data is illegal and something Anthropic will not be doing.


I was not talking about the credit card data, obviously; they have much more data they can actually sell.

In my opinion, storing someone's credit card data online after purchase, without a clear option to delete it should be illegal.


GDPR them, if you're in the EU. Or just pretend you are and see if they flinch.


This looks cool!

Feature requests: have something like aider's repo-map, where the context always contains a high-level map of the whole project, and then the LLM can suggest specific things to add to the context.

Also, a big use case for me is building up understanding of an unfamiliar code base, or part of a code base. "What's the purpose of X module?", "How does X get turned into Y?".

For those, it's helpful to give the LLM a high-level map of the repo and let it request more files into the context until it can answer the question.

(Often I'm in learning mode, so I don't yet know what the right files to include are.)



Do you use all of these at the same time? If you'd pick just one or two: Which and why?


It's a fast graphical editor. Not sure?


I hope that in a few years we look back at this era of "prompt an LLM for a shell command and instantly run whatever it spits out by pressing enter" with collective embarrassment.


What does this mean?


People that don't like Bash|Zsh|... and are afraid of ImageMagick|ffmpeg|curl|...'s manuals and want AI to generate the perfect script for them.


Why not both? What do you see wrong with having GPT spit out a complicated ffmpeg filter chain and explain every step of the output, which you can then look up in the docs and unit test before implementing? I find verifying the GPT output is still quicker than consulting giant wall-of-text man pages.


Fish and learning to fish. The latter widens your perspective and give you more options, especially if there are a lot of rivers nearby!


You are saying one should learn to fish with a spear not a fishing rod


Virsh has a --help screen that shows a command listing that is 20000 characters long. I too would be intimidated.


Just looking at the "inline transformations animation"...

How is typing "Add the WhileExpression struct here" better or easier than copy/pasting it with keyboard and/or mouse?

I want something that more quickly and directly follows my intent, not makes me play a word game. (I'm also worried it will turn into an iterative guessing game, where I have to find the right prompt to get it to do what I want, and check it for errors at every step.)


I've been using Zed's assistant panel heavily and have really enjoyed the experience. The UI can be a bit frustrating, though. Sometimes when you write, it's hard to get it to send your query. The new /workflow seems to really bridge the last gap to effectively edit the parts I'm asking for help with.

I'm already paying for OpenAI API access, definitely gonna try this


It's the opposite for me ahaha. I'm very excited for Zed as a performant and powerful text editor, an "updated + open-source sublime text" if you will. But I have absolutely no interest in AI and copilot and github integrations and whatnot.

This is not a criticism of Zed though, I simply have no interest. Quite the contrary: I can only praise Zed for how simple it is to disable all these integrations!



Same here. If I found Zed now I would probably avoid it, but having tried it already I'm glad to have it, and will just turn off the intrusive bits.


>Claude 3.5 Sonnet's new Fast Edit Mode, optimized for text editing. This upcoming mode achieves unprecedented speed in transforming existing text, enabling near-instantaneous code refactoring and document editing at scale

I wonder what this is. Have they finetuned a version which is good at producing diffs rather than replacing an entire file at once? In benchmarks sonnet 3.5 is better than most models when it comes to producing diffs but still does worse than when it replaces the whole file.


It seems like it just "echoes" back the unchanged contents? https://x.com/zeddotdev/status/1825967818329731104


Not releasing a cross-platform code editor on the dominant OS seems quite weird in my opinion. (I know they plan to do it, but as someone who has built cross-platform apps, it is not rocket science to have Win32 support from the start.)


Anecdotally, I have never seen Windows widely used for development, outside of .NET shops (but that shouldn't be a surprise).

Moreover, there's plenty of quirks Windows has with respect to:

- Unicode (UTF-16 whereas the world is UTF-8; even Java uses UTF-8 nowadays, so it's only Windows where UTF-8 is awkward to use)

- filenames (lots of restrictions that don't exist on Mac or Linux)

- text encoding (the data in the same string type changes depending on the user's locale)

- UUIDs (stored in a mixed-endian format)

- limit of open files (much lower than Mac and Linux; breaks tools like Git)

If you write software in Java, Golang, or Node.js, you'll quickly encounter all of these issues and produce software with obscure bugs that only occur on Windows.

I'm not sure about Rust, but using languages that claim cross-platform support isn't enough to hide the details of an OS.

In every job I've had, the vast majority of devs were on Mac OS, with Linux coming in at a close second (usually senior devs). So I wasn't surprised Zed first supported Mac then Linux. Windows support is nice for students, game developers, and people maintaining legacy .NET software, but those people won't be paying for an editor.


This comment is not correct.

Java uses mixed Latin1/UTF-16 strings. The Latin1 mode is used for compact storage of alphanumeric text as the name suggests: https://github.com/openjdk/jdk/blob/1ebf2cf639300728ffc02478...


The internals of OpenJDK may use either Latin1 or UTF-16, but the Java API, as of Java 18, defaults to the UTF-8 character set for string operations and text codecs.[1] Just like the standard APIs of Mac OS default to UTF-8 text encoding even though CFString internally uses various different representations to optimize memory usage.[2]

[1] https://docs.oracle.com/en/java/javase/18/docs/api/java.base...()

[2] https://developer.apple.com/library/archive/documentation/Co...


The macOS reference is irrelevant to Java here.

The charset defines the encoding that applies first and foremost to I/O behavior: how an otherwise untyped stream of bytes is converted to or from the (UTF-16) text that Java stores.

https://openjdk.org/jeps/400 is yesterday's news and something that .NET has been doing for a long time (UTF-8 encoding is an unconditional default starting with .NET Core 1.0 (2017)).

Whether Win32 APIs take UTF-8 or something else (well, it's usually ANSI or UTF-16) is something for the binding libraries or similar abstraction packages for a language of choice to deal with, and has rather minor impact on the overall flamegraph if you profile a sample application.

I find it strange having to defend this, but the UTF-8 vs UTF-16 argument really has no place in 2024, as dealing with popular encodings is as solved a problem as it gets in all languages with an adequate standard library.


- Why is Unicode an issue? The editor should use UTF-8. I don't think it's forced to follow the Windows standard if they have their own font rendering?

- Filenames are an issue, but it shouldn't be too hard to pull in a path library to do most of that for you. Which is why you should code for Windows from the start, because you will quickly realize: oh yeah, maybe I can't hardcode forward slashes everywhere, because it's going to be a pain to refactor later. And oh yeah, maybe case-insensitivity is a thing that exists.

- Text encoding..? Just don't do that. Encode it the same way. Every modern editor, including Notepad, can handle \n instead of \r\n now. You needn't be re-encoding code that was pulled down from Git or whatever.

- Also don't see why UUIDs are relevant.

- The limit on open files is probably a legit issue. But it can be worked around, of course.

Anyway, none of these sound like major hurdles. I think the bigger hurdles are going to be low-level APIs that Rust probably doesn't have nice wrappers for. File change notifications and... I don't know what. Managing windows. Drivers.


Everywhere I've worked, most users (including devs) are on Windows. Windows is the major OS in enterprise environments, it's a huge mistake to not support it imo.


Anecdotal evidence. Although I agree about OS support parity, because OS religiosity is generally dumb.


None of that matters in practice, because in this context it would be trivial to solve with a correctly built OS abstraction layer.

And Windows is by large the development platform of choice for any serious gamedev work.


For those that want to try it, and have a rust development environment installed, the following runs Zed on Windows. (The editor part at least, I haven't tried the collaborative functions).

  git clone https://github.com/zed-industries/zed
  cd zed
  cargo run --release


It's especially weird they released linux (with x11 AND wayland backends) before windows.


* Both Linux and macOS are Unices, so there is less effort.

* The framework they use supports X11 and Wayland out of the box, it wasn't as much effort as you'd think.

* They accept contributions.


For the kinds of devs they've hired, Windows is the third most common development or production environment. I've worked at startups like this for 20 years and haven't touched a windows system.

I realize y'all are out there, but from where I'm sitting, this isn't odd at all. They're likely most familiar with and using Unixes.


Give them time, developing cross-platform GUI apps was never that simple


I like the idea of Zed, and I recently went editor hopping. I installed Zed but was immediately hit with "your gpu is not supported/ will run incredibly slow" message. Gah...


In my case this pointed out a problem with my NVIDIA drivers that I didn't know about. Once I fixed that issue my whole KDE system ran much faster and allowed Zed to run.


In my case:

CPU: 12th Gen Intel i7-1255U (12) @ 4.700GHz

GPU: Intel Device 46a8


I don't mind them having AI features, but I wish they'd fix some of the most basic performance issues considering that their entire reason to exist was "performance".

You know, things like not rerendering the entire UI on the smallest change (including just moving your mouse) without damage reporting.


Sorry if this comes off as entitled, I'm honestly just a bit confused by the following.

I have no experience using (current) VSCode, but I've used Neovim on a daily basis for a couple of years. I think the things which make an editor a "better editor" are the small things, things which solve problems that might cause a little friction while using the editor. Having a lot of these little points of friction results in a (for me) annoying experience.

Zed has a lot of these (from the outside) simple issues and I don't see them working on them. Again, I understand that they have to prioritize. But this doesn't result in me feeling comfortable spending time adopting this editor. I'm "scared" that issues like https://github.com/zed-industries/zed/issues/6843 might be very low on the list of work being done and always will be, while the next big (maybe honestly great) feature gets all the attention.


Tried Zed AI for a bit as a heavy user of Cursor; a few thoughts:

- I like that they are trying something different with the assistant panel view by providing end users full control of the context, as opposed to Cursor's "magic" approach. There is a huge tradeoff between the granularity of control and efficiency, however. The friction of manually filling in context for the assistant window might repel devs from using it constantly.

- Zed AI is still missing a lot of UX details in its inline assistant capabilities. E.g. pressing ctrl+? for inline assist only selects the current line, and users have to manually select a block of code for inline assist, which is really annoying. In Cursor, cmd+k automatically selects the surrounding code block.

- Definitely a huge plus that we get to choose our own LLM providers with Zed AI.


I think Zed is starting with more transparent, elegant foundations, and then they'll build in more optional magic from there. For example, they're working on automatic codebase RAG.


At some point an AI-first programming language will have to come along which integrates the AI models, editor, and programmer input seamlessly.

I'm not sure what that is, but I'm guessing it will be something along the lines of Prolog.

You will basically give it some test cases, and it will write code that passes those test cases.


I just want a perplexity-style agentic integration that researches dozens of pages first, does internal brainstorming before printing output in my editor.

I just had a many-hour long hacking session with Perplexity to generate a complex code module.


I've been trying to mainline Zed for the past few months...and overall I really do like it - but there are enough quirks/bugs that make me frustrated.

A simple example: something as simple as the hotkeys for opening or closing the project panel with the file tree aren't consistent and don't work all the time.

To be clear: I am excited about this new addition. I understand there's a ton of value in these LLM "companions" for many developers and many use cases, and I know why Zed is adding it...but I really want to see the core editor become bullet proof before they build more features.


You can do this with aider in any IDE:

https://aider.chat/

I think the focus on speed is great, but I don't feel my IDE's speed has held me back in a decade.


Anthropic working with Paul Gauthier and Zed being aider-aware would be phenomenal. He's been working this for a while:

https://news.ycombinator.com/item?id=35947073

When familiar with Aider, it feels as if this Zed.ai post is chasing Paul's remarkably pragmatic ideas for making LLMs adept at codebases, without yet hitting the same depth of repo understanding or bringing automated smart loops to the process.

Watching Aider's "wait, you got that wrong" prompt chains kick in before handing the code back to you is a taste of "AI".

If your IDE is git savvy, then working with Aider in an Aider REPL terminal session with frequent /commits that update your IDE is like pair programming with a junior dev that happens to have read all the man pages, docs wikis, and stackoverflow answers for your project.


What has slowed me down is all the garbage pop ups and things that get in my way. Every time I open VSCode it tries reconnecting to some SSH I had open before it lets me do anything. And god forbid I have two different workspaces. The constant "what's new" and "please update me now"s don't help either.

I love IntelliJ but it does not start up quickly, which is a problem if I just want to look at a little code snippet.


Let me be very direct - what's the strength over the competition, e.g. Cody? The fact that it's its own text editor? I'm seeing the assistant emphasized but that just looks like Cody to me.


Agreed and Cody has recently upped their free tier and Sonnet 3.5 can be used for free for completions and up to 200 chat messages per month. Plus you can use it in VS Code and IntelliJ - no need to learn a new text editor.


Missing from this announcement is language around Privacy. Cursor for example has a Privacy Mode that promises not to store code, and this seems like a critical feature for any AI enhanced dev tools.


It's great news that they provide it for free. It's hard to subscribe to all the LLM providers. Even with a pro subscription, you need to buy credits to be able to use them with editors, which gets very expensive if you use them a lot.

On another side, I really like the experience of coding with GitHub Copilot. It suggests code directly in your editor without needing to switch tabs or ask separately. It feels much more natural and faster than having to switch tabs and request changes from an AI, which can slow down the coding process.


Has any long-term Emacs user delved into Zed and ported the cool features yet?

Don't take it as sarcasm, I am genuinely interested. I think Emacs' malleability is what still keeps it alive.


For real-time collaborative editing, consider experimenting with https://github.com/zaeph/crdt.el


What are the cool Zed features? Also genuinely interested.


In my understanding Zed is "Figma for code". Huge focus on collaboration (hence the slogan "multiplayer code editor") and AI.

It's hard for me to understand what text editor itself has to do with LLM completions.


Long-term Emacs user here: I actually just switched entirely to Zed.


What made you switch?


I had hoped Zed would be a good editor for junior developers, but that ship has apparently sailed, and its destination isn't where we need to go.


Seems like a non-sequitur? Why does LLM integration mean that Zed is less good of an editor for junior devs?


Right, LLM integration should be a huge boon for junior developers.


Not sure if it's what GP is talking about - but haven't you noticed how many juniors seem to be shooting themselves in the foot with LLMs these days, becoming over-reliant on them and gaining expertise more slowly?


This is exactly what I'm talking about. Ever since LLMs took over, I've noticed an uptick in fellow senior developers complaining about the quality of work, and I've also seen a huge increase in poor-quality PRs to open source projects.

Like, normally my primary complaint about LLMs is their copyright-violating nature, and how it leaves everyone who has ever published the output of an LLM hanging out to dry, but unless LLM output improves, I think it may end up being a non-issue: their use will die out, and every AI startup will die, just like all the alt-coin startups did.

Want to change my mind on LLM quality? Have it produce code so good that I can't tell the difference between an inexperienced developer (the kind that would be hired into a junior role, not the kind that would be hired for an internship) and the output of this thing.


Here's one of my best documented examples: https://simonwillison.net/2024/Apr/8/files-to-prompt/

And a more recent example (writing me a Django app): https://simonwillison.net/2024/Aug/8/django-http-debug/#clau...


I've noticed the opposite: people who had never even started learning to program are now getting stuck in, because they don't have to wade through six months of weird error messages about missing semicolons to get started.

But I guess I'm talking about complete newbie developers, which is a different category from junior developers.

I have 25+ years experience at this point so I'm pretty far removed from truly understanding how this stuff benefits newcomers!


Are you talking specifically about the general UX? At first glance it does look like there's a bit more of a learning curve to navigate.


Can all the overhead required for the AI features be disabled with a feature flag, so that there's zero CPU cost and zero network transmission?



I wonder if there's already a solution that lets me ask questions about local codebases, e.g. "how does this subsystem work?"


This feature does exactly that. You can open up the chat panel, run "/tab name-of-tab" or "/file path-to-file" and then start asking questions about the code.


Hey! I'm Nate from Zed. You can also use the /file command to drop entire directories, or even globs into the assistant.

For example, you can do /file *.rs to load all of the rust files in your project into context.

Here is a simple but real example I used a while back:

"/file zed/crates/gpui/src/text_system.rs

I have a font I want to check if it exists on the system. I currently have a &'static str.

Is there something in here that will help me do that?"

I haven't interfaced with the lower level TextSystem that much, so rather than dig through 800 lines of code, I was able to instantly find `is_font_available()` and do what I needed to do.


Loading a large number of files into context uses up a lot of the context window and may even exceed the LLM's context length. Is there any RAG-like feature planned so that only the relevant parts of the code are loaded?


Meanwhile, HTML tags in JSX/TSX files still don't auto-complete or auto-close. Speaking as someone who used Zed for nearly 7 months, it seems like they should be prioritizing features that make the editor more usable. I’d be excited to go back to Zed, but these issues drove me to Neovim.


Until "AI" means a systems-thinking capability that can analyze the codebase and give real suggestions, I don’t buy it. Everything I have seen so far is a waste of my time and resources, and at best is useful for generating tests or docstrings.


Hey Zed team, just one little nitpick about the page. I love the keyboard shortcuts at the top for the download page and login. However, when I try to Ctrl-L to select the URL, it triggers the login page shortcut.

Brave Browser Windows 10


Thanks for the report, this should be fixed now!


Yep fixed!


Can you plug in other LLMs, à la: https://dublog.net/blog/open-weight-copilots/


The Zed configuration panel includes tools for adding an Anthropic API key, a Google Gemini API key, or an OpenAI API key, or for connecting to a local instance of Ollama.
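If you go the Ollama route, you can also point Zed at it from settings.json. A rough sketch (the exact keys can shift between Zed versions, and the model name is just an example of something you'd have pulled locally):

  // assumes Ollama is running locally on its default port
  "language_models": {
    "ollama": {
      "api_url": "http://localhost:11434"
    }
  },
  "assistant": {
    "version": "2",
    "default_model": {
      "provider": "ollama",
      "model": "llama3.1:latest"
    }
  }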


I found Zed to be pretty unusable. I don't use AI features, but I'd love to replace VSCode anyway. I just need an editor that actually works first.


I’d be curious how this compares to Supermaven, my current favourite AI autocomplete.


Add `"features": {"inline_completion_provider": "supermaven"}` to your Zed settings and you can have the best of both worlds :D


What's the difference between VSCode (with co-pilot), Zed, & Cursor?


Cursor is a fork of VSCode with code AI that is, in my opinion, better than Zed and the other competitors, because they implement the workflow for MODIFYING existing code better. Most other code AI products are only good at code generation or at being a better Stack Overflow. I don't use Copilot, so I can't tell: does it show you a diff like Cursor does when modifying code?


The biggest thing missing is the templates that are available in VSCode.


First the unsolicited package installation controversy, now they've jumped onto the AI bandwagon. Is this a speedrun attempt at crashing a newly created company?

What's next? Web3 integration? Blockchain?


I'm a Cursor main; I don't really have any burning pains that make me want to change tools, but I'm open to what I don't know.

Zed vs Cursor review anyone?


- open-source vs closed fork of vscode

- transparent assistant panel vs opaque composer. you control your own prompts (cf. [0])

- in Zed the assistant panel is "just another editor", which means you can inline-assist when writing prompts. super underrated feature imo

- Zed's assistant is pretty hackable as well, you can add slash commands via native Zed extensions [1] or non-native, language-agnostic Context Servers [2]

- Zed's /workflow is analogous to Cursor's composer. to be honest it's not quite as good yet, however it's only ~1 week old. we'll catch up in no time :)

- native rust vs electron slop. Zed itself is one of the larger Rust projects out there [3], can be hard to work with in VS Code/Cursor, but speedy in Zed itself :)

[0]: https://hamel.dev/blog/posts/prompt/

[1]: https://zed.dev/docs/extensions/slash-commands

[2]: https://zed.dev/docs/assistant/context-servers

[3]: https://blog.rust-lang.org/inside-rust/2024/08/15/this-devel...


Two areas where I think Zed might fall behind: Cursor Tab is REALLY good and probably requires some finetuning/ML chops and some boutique training data.

For Composer, there's going to be more use of "shadow workspace" https://www.cursor.com/blog/shadow-workspace to create an agentic feedback loop/objective function for codegen, along with an ability to navigate the language server, look up definitions, and just generally have full context like an engineer. Are there plans for the same in Zed?

Also, Cursor has a model-agnostic apply model, whereas you all are leaning on Claude.


Cursor is Electron/VSCode-based. Zed uses a custom-built Rust UI and editor model that gives 120 fps rendering. (Or was it 60 fps?)

It is really smooth on a Mac with ProMotion.


Hey! I'm Nate from Zed. There are a lot of questions about this, here are some quick thoughts...

Cursor is great – we explored an alternate approach to our assistant, similar to theirs, but in the end we wanted to lean into what we think our superpower is: transforming text.

So we leaned into it heavily. Zed's assistant is completely designed around retrieving, editing and managing text to create a "context"[0]. That context can be used to have conversations, similar to any assistant chatbot, but can also be used to power transformations right in your code[1], in your terminal, when writing prompts in the Prompt Library...

The goal is for context to be highly hackable. You can use the /prompt command to create nested prompts, use globs in the /file command to dynamically import files in a context or prompt... We even expose the underlying prompt templates that power things like the inline assistant so you can override them[2].

This approach doesn't give us the _simplest_ or most approachable assistant, but we think it gives us and everyone else the tools to create the assistant experience that is actually useful to them. We try to build the things we want, then share it with everyone else.

TL;DR: Everything is text because text is familiar and it puts you in control.

[0]: https://zed.dev/docs/assistant/contexts.html

[1]: https://zed.dev/docs/assistant/inline-assistant

[2]: https://zed.dev/docs/assistant/prompting#overriding-template...


Hey! I really see the power in Zed and the extensibility and simplicity. Great approach.

I posted this above, but want you to see it:

Two areas where I think Zed might fall behind: Cursor Tab is REALLY good and probably requires some finetuning/ML chops and some boutique training data.

For Composer, there's going to be more use of "shadow workspace" https://www.cursor.com/blog/shadow-workspace to create an agentic feedback loop/objective function for codegen, along with an ability to navigate the language server, look up definitions, and just generally have full context like an engineer.

Also, Cursor has a model-agnostic apply model, whereas you all are leaning on Claude.

Any plans to address this from the core team or more of a community thing? I think some of this might be a heavy lift

I really like the shared context idea, and the transparency and building primitives for an ecosystem


Thanks, I'll take a look at these. We aren't done: a good amount of the Zed team uses our assistant heavily every day, so we'll continue to refine it.

I shared this with the team. I need to spend some time in Cursor to understand their mental model; it seems a lot of folks have come to enjoy using it.

We do also have extensibility planned for the assistant, you can see a taste of it in the slash command code if you want to check it out.

AFAIK you can use `/workflow` with some models other than Claude, but I can't speak to which ones off the top of my head.


Great. They also have other blog posts you should look at, covering their product roadmap and research directions (https://www.cursor.com/blog/problems-2024), and an AI Engineer conference talk that goes into their vision: https://youtu.be/6g28WpZbF1I?si=MkHgxMDMjxxOHtcZ


How do I enable Zed AI?

I'm logged in, using Zed Preview, and selecting the model does nothing. In the configuration it says I "must accept the terms of service to use this provider" but I don't see where and how I can do that.


I'm having the same problem.


Ditto.

EDIT: figured it out. My Zed settings was broken, because it tries to create a settings file in "~/.config/zed", and my entire "~/.config" directory is symlinked somewhere else, so macOS permissions broke Zed's ability to create that config file. So I gave Zed full-disk access in the macOS "Privacy & Security" settings, Zed is now able to make a config file (which is where the model is set when you choose one), and everything is hunky dory.


They just launched it a few minutes ago. It's going to be a while before there's a good review.


Excited to use Zed, but the AI chat panel does not work on a Mac host connected to a Linux guest.


I've been playing with it this morning, and fuck me, this is awesome. I finally feel that an LLM is actually useful for coding. I still have to check everything, but it's like having a talented junior helping me.


Oh, they changed their website. It's stunning!


JetBrains ships a local, offline AI auto-complete in PyCharm. Sure, it's limited, but it shows it can be done. It's a pity other companies aren't trying the same, especially this one, which boasts about millisecond editing times.

Edit: JetBrains, not IntelliJ. Auto-complete details - https://blog.jetbrains.com/blog/2024/04/04/full-line-code-co...


Zed will hook into Ollama as well, so you can run your own model locally; just not as part of the same process (yet?).


Hooking into standard APIs like Ollama's is definitely preferred (at least by me) because it's more composable and modular. Very Unix-y. It lets you use whatever works for you instead of whatever the vendor picked.
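That's part of the appeal: the local endpoint Zed talks to is the same plain HTTP API anything else can hit. A quick sketch, assuming you've already pulled a model (the model name here is just an example):

  # Ollama listens on localhost:11434 by default
  curl http://localhost:11434/api/generate -d '{
    "model": "llama3.1",
    "prompt": "Explain what a language server does in one sentence.",
    "stream": false
  }'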


I tried it with C#. It's not comparable to Copilot.

If I'd never used Copilot I might be slightly impressed.


You're right, it's not comparable. JetBrains' code completion actually integrates with the IDE to guarantee the suggested code isn't entirely a hallucination -- for example, it checks for unresolved references so that its suggestions don't reference non-existent variables/methods.

I've disabled Copilot as it produces garbage way too often. I found myself "pausing" to wait for a suggestion that I'd ultimately ignore because it was completely invalid. I've left JetBrains' code completion on, though, because it's basically just a mildly "smarter" autocomplete that I'll occasionally use but don't find myself relying on.


I don’t see programming getting exponentially harder. So eventually local compute will catch up with programming needs and offline AI will be pretty good.


Local compute is also getting progressively harder to make faster. I have a CPU from 6 years ago and checked some benchmarks to see if an upgrade would be worth it. It's a 90% increase in performance, with a 27% improvement in single-threaded performance. Pretty substantial, but not exponential. GPUs are advancing more quickly than CPUs right now, but I wouldn't be surprised if they hit a wall soon.


The local compute problem is a cost problem: most people do not have the necessary hardware, and judging by trends this will continue for another ~decade.


For those who don't have the latitude to upload their code to Microsoft's servers, a smaller, limited, but fully local AI is better than nothing.


I agree that the completions might not be that great, but for context: this is a 100M-parameter model, while most models that people compare it to are at least 100x bigger.

They also focus on single-line completions and ship different models per programming language. All of this makes it possible to ship a decent completion engine with a very small download size.


Interesting, have you used it/found it to be usable? I use IntelliJ myself, but it's not known for being a lean beast. With the addition of stuff like test containers and a few plugins, I'd be surprised if my machine didn't melt from adding a local LLM too.


I mainly use PyCharm and I found the auto-complete to be good. It doesn't always kick in when I expect but some of the suggestions have been surprisingly complex and correct.


Maybe it's because there's literally no point in using a local LLM for code completion. You'd be spending 90% of your time correcting it. It's barely worth it to use Copilot.


AI autocomplete is just a toy; it has been there since the beginning of AI coding and I find it pretty useless. I prefer writing a proper prompt to get better answers. Also, most of the time engineers (use AI to) modify existing codebases rather than write new code.


> AI autocomplete is just a toy

I disagree. When I'm writing code and it autocompletes the line for me with the correct type, and the correct type params set for the generics, it saves me the mental effort of scrolling around the file to find the exact type signature I needed. For these small edits it's always right if the information is in the file.


Why do you need an LLM for that? That's basic type inference.


I don't know what statically typed language you are using that never requires specifying types. As far as I know, only OCaml works this way.

Different languages support different levels of type inference. When I'm writing Rust and TypeScript I'm often specifying types this way for structs and function signatures.

With LLMs, what often happens is I write a parameter or variable name and it infers the type from the name, because I have used the same name somewhere else in my codebase. This would not work without LLM autocomplete.
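A contrived TypeScript sketch of what I mean (all the names here are made up):

  // elsewhere in the (hypothetical) codebase:
  interface UserProfile { id: string; displayName: string }
  declare function fetchUserProfile(id: string): Promise<UserProfile>;

  async function renderSidebar(id: string) {
    // Typing just `const profile` here, the LLM will usually complete the
    // annotation and the call below, because "profile" appears next to
    // UserProfile throughout the codebase. Classic type inference has
    // nothing to go on until the right-hand side is already written.
    const profile: UserProfile = await fetchUserProfile(id);
    return profile.displayName;
  }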


Google's systematic, quantitative evaluation of AI coding assistance tools disagrees with your personal, anecdotal findings (and agrees with many other similar studies):

https://research.google/blog/ai-in-software-engineering-at-g...


I don't see a comparison to non-GenAI autocomplete there.

Eclipse wrote half my code characters 15 years ago.


What programming languages do you mostly work in?

I've been wondering if the benefits of AI-autocomplete are more material to people who work in languages like Python and JavaScript that are harder to provide IDE-based autocomplete for.

If you're a Java or TypeScript developer maybe the impact is reduced because you already have great autocomplete by default.


I think TypeScript benefits even more than JavaScript, because the type signatures add so much context. After defining the types and signatures, Copilot will often complete the body to within a few changes of perfect.


Yeah +1, the current top LLMs do pretty well with TypeScript


What it shows is that it can be done, in a limited way. Other people might not like those limits and choose to go a different way. I am not sure what's worth lamenting here.


> I am not sure what's worth lamenting here.

The normalisation of surrendering all our code to remote AI cloud gods, maybe? The other being that a super responsive IDE now has major features delayed by network requests, although hardware requirements likely make that faster for most people.


That sounds a little too spooky for my taste, but you do you. What anything beyond that means, in effect, is that you (not necessarily you) want to choose my values for me (not necessarily me).

I don't see why I should care to have you do that.


Zed is trying to position itself as a next-gen hacker's editor: text editing with lessons learned from the multitude of experiments in the space. A flagship feature that is online-only and requires me to trust a third party is a deal breaker for many. For instance, my employer would not be pleased if I suddenly started sharing code with another party.

Take the AI out of the conversation: if you told your employer you shared the codebase, that’s an insta-fire kind of move.


Yeah, but so is the difference between "I sent a message to a random person about our business" and "I sent a message to one of our suppliers".


Is Zed a supplier? Sounds like a random developer can sign up at will without any corporate NDAs or other established legal agreements. Will my employer hash out the NDAs for every developer tool that wants to incorporate cloud AI for the lulz?


What's the impact on laptop battery life roughly, when you have a local LLM constantly running in the background of your text editor?


IntelliJ is an IDE, though, not a text editor. If you want a text editor with AI, you may need to wait for Microsoft to bring ChatGPT to notepad.

It seems to be limited to a single CPU core at a time, so depending on your CPU boosting settings, some but not too much.

It's quite snappy despite not using a lot of resources. I tried replicating the effect using a CUDA-based alternative but I could get neither the snappiness nor the quality out of that.


Sure, but if it were good, more people would use it.


Running a much worse model at higher latency (since local GPU power is limited) would be a worse experience for Zed.


Hmm. I was excited about Zed, but it now seems painfully clear they’re headed in a completely different direction than I’m interested in. Back to neovim, I guess…


I know I'm not the target market. I don't want my editor to have AI.

I was really looking forward to trying Zed, but this just means I'll stick to VS Code with the AI gunk disabled.

In general, if any product comes with "AI" I'm turned off by it.


I hope all our competitors find your answer inspirational!


Yeah, I also don't want my editor to have syntax highlighting. I refuse to use any editor that has it. Don't even get me started on native vcs support. My rule of thumb is: "If it auto-completes then it gets the auto-yeet."


It's "Find and replace" that makes my blood boil. I won't stand for it.


Amen brother.


I love AI and think Zed is dropping the ball big time here. How can they focus their limited resources on developing shitty AI extensions when they should be focusing on absolute deal-breakers, like cross-platform support?

Nobody cares that they can't use Claude on Zed. Everyone cares that they can't use Zed on Windows.


"Everyone cares that they can't use Zed on Windows."

By "Everyone" I assume you mean "Everyone who uses Windows". I can't imagine us Mac/Linux people care very much.


Yep. It's an easy example to point at that shows how skewed Zed's priorities are. They're seemingly much more interested in adding AI than in the other 2.7k reported issues on their GitHub repo. As a Zed user on Mac, that's when I jump ship.


Sure. But that's most users.


> Nobody cares that they can't use Claude on Zed.

I do. I just cancelled my Cody subscription because it slowed down my VSCode enormously. If Zed is able to do the advanced AI features that Cody did but remain snappy, I will gladly pay them $20 a month.

> Everyone cares that they can't use Zed on Windows.

There are more users on Windows and Android, but on average they pay less, and developer-tool purchase decisions there are more often made by employers, not individuals. If you're starting a small company making a new kind of developer tool that isn't tailored for corporations, macOS is a much better market.


Hey golergka, sorry to hear about Cody slowing down your VS Code. Would you be able to share more info about this? I haven't seen Cody users complain about this in the past, so trying to understand what the potential issue might have been.

Feel free to email me at ado.kukic@sourcegraph.com if you'd prefer.


TLDR: Zed is pretty sweet. Amodei doesn’t have a next model on raw capability.


I had a brief processing error: I was really confused for a moment about how quickly Xed had progressed.

https://en.wikipedia.org/wiki/Xed



