I had Claude add it as an edit-prediction provider (running locally via llama.cpp on my MacBook Pro). It's been working well so far (including next-edit prediction!), though it could use more testing and tuning. If you want to try it out you can build my branch: https://github.com/ihales/zed/tree/sweep-local-edit-predicti...
If you have llama.cpp installed, you can start the model with `llama-server -hf sweepai/sweep-next-edit-1.5B --port 11434`
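Add something like the following to your settings.json (the exact keys are in my branch; `api_url` here is illustrative, and should point at wherever llama-server is listening):
```
{
  "edit_predictions": {
    "sweep_local": {
      // illustrative key: the llama-server endpoint started above
      "api_url": "http://localhost:11434",
      "model": "sweepai/sweep-next-edit-1.5B"
    }
  }
}
```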
Other settings you can add in `edit_predictions.sweep_local` include (example after the list):
- `model` - defaults to "sweepai/sweep-next-edit-1.5B"
- `max_tokens` - defaults to 2048
- `max_editable_tokens` - defaults to 600
- `max_context_tokens` - defaults to 1200
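For example, to run on a tighter memory budget you could shrink those limits below their defaults (the numbers here are arbitrary):
```
{
  "edit_predictions": {
    "sweep_local": {
      "model": "sweepai/sweep-next-edit-1.5B",
      "max_tokens": 1024,
      "max_editable_tokens": 300,
      "max_context_tokens": 600
    }
  }
}
```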
I haven't had time to dive into Zed's edit predictions and do a thorough review of Claude's code (it's not much, but my Rust is... rusty, and I'm short on free time right now), and there hasn't been much discussion of the feature, so I don't feel comfortable submitting a PR yet. If someone else wants to take it from here, feel free!
This is great and similar to what I was thinking of doing at some point. I just wasn't sure if it needed to be specific to Sweep Local or if it could be a generic llama.cpp provider.
I was thinking about this too. Zed officially supports self-hosting Zeta, so one option would be to create a proxy that speaks the Zeta wire format but is backed by llama.cpp (or any model backend). In the proxy you could configure prompts, context, templates, etc., while still using a production build of Zed. I'll give it a shot if I have time.
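Roughly the skeleton I have in mind: accept whatever Zeta sends, rewrite it into a llama-server call, and translate the result back. The Zeta fields below (`excerpt`, `output_excerpt`) are placeholders, since I haven't checked the real schema; llama-server's `/completion` endpoint and its `prompt`/`n_predict`/`content` fields are real:
```
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# llama-server's native completion endpoint (started with --port 11434)
LLAMA_URL = "http://localhost:11434/completion"

class ZetaShimHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        zeta_req = json.loads(self.rfile.read(length))
        # "excerpt" is a placeholder for whatever field Zeta actually sends;
        # this is where prompt templates and extra context would be assembled.
        prompt = zeta_req.get("excerpt", "")
        llama_req = Request(
            LLAMA_URL,
            data=json.dumps({"prompt": prompt, "n_predict": 256}).encode(),
            headers={"Content-Type": "application/json"},
        )
        content = json.loads(urlopen(llama_req).read())["content"]
        # "output_excerpt" is likewise a placeholder for Zeta's response shape.
        body = json.dumps({"output_excerpt": content}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8099), ZetaShimHandler).serve_forever()
```
You'd then point a self-hosted-Zeta build of Zed at port 8099 instead of the hosted endpoint, however your build configures that.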
Hey HN! I've been a fan of Snowflake since I started using it in my first job out of school. But, powerful as it is, it still comes with its fair share of challenges and gotchas. I've been digging into Snowflake metadata and access controls over the last several months as part of my work on Jetty Core[1], and thought about writing a white paper on best practices to share some of what I've learned. It turns out there are plenty of those already, so I decided to build a living white paper instead.
Jetty Scorecard is an open-source Python library/app that connects to a Snowflake account and provides insights and recommendations specific to your configuration. Today it runs 17 checks[2], covering things like which tables are most popular, whether you have misconfigured masking policies, and whether your future grants are being ignored by the system[3].