I feel like I shouldn't love x86 encoding, but there is something charming about it, probably echoing its 8-bit predecessors. It seems designed for tiny-memory environments (embedded, bootstrapping, etc.) where you don't mind taking a hit on memory access.
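To make that concrete, here's a quick illustration of the density (32-bit mode; just a sketch, double-check against the manuals):

```python
# A few classic one-byte x86 encodings (32-bit mode), showing how dense
# the bread-and-butter operations are; the pattern goes back to the 8-bit chips.
ONE_BYTE = {
    0x40: "inc eax",   # 0x40..0x47: inc r32, one byte per register
    0x50: "push eax",  # 0x50..0x57: push r32
    0x58: "pop eax",   # 0x58..0x5F: pop r32
    0xC3: "ret",
}
for opcode, asm in ONE_BYTE.items():
    print(f"{opcode:#04x}  {asm}")
# Even a memory operand stays tiny: "mov eax, [ebx]" encodes as just 8B 03.
```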
Besides Claude.vim for "AI pair programming"? :)
(tbh it works well only for small things)
I'm using Codeium and it's pretty decent at picking up the right context automatically; it usually autocompletes quite flawlessly within a ~100 kLoC project. (So far I haven't been using the chat much, just autocomplete.)
Yep! And AWS Bedrock also gives you plenty of other models on the back end, plus better control over rate limits. (But for us the important thing is data residency: the code isn't uploaded anywhere.)
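For reference, the invocation is pleasantly boring; a minimal sketch with boto3 (region and model ID are assumptions, swap in whatever you have enabled):

```python
import json
import boto3

# Bedrock runs the model inside your own AWS account/region boundary,
# which is what gives you the data-residency guarantee.
client = boto3.client("bedrock-runtime", region_name="eu-central-1")  # assumed region

resp = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model ID
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": "Review this function for bugs: ..."}],
    }),
)
print(json.loads(resp["body"].read())["content"][0]["text"])
```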
Yup! Feel free to add the client support; you are on the right track with the changes.
To test the whole flow out, here are a few things you will want to do:
- create the LLMProperties object over here: https://github.com/codestoryai/sidecar/blob/ba20fb3596c71186...
- add support for it in the broker over here: https://github.com/codestoryai/sidecar/blob/ba20fb3596c71186...
- after this you should at the very least be able to test out Cmd+K (highlight and ask it to edit a section)
- In Aide, if you go to User Settings and tick "aide self run", you can then run your local sidecar so you are hitting the right binary (first kill the binary running on port 42424, that's the webserver binary that ships along with the editor; a quick port check is sketched below)
If all of this sounds like a lot, you can just add the client and I can also take care of the plumbing!
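And for the port 42424 step, here's a quick sanity check before launching your own build (just a sketch):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((host, port)) == 0

if port_in_use(42424):
    print("Port 42424 is taken; kill the shipped sidecar webserver first.")
else:
    print("Port 42424 is free; start your locally built sidecar now.")
```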
Hmm, looks like this is still a pretty early-stage project for me. :)
My experience:
1. I didn't get a working window after opening the installation for the first time. Maybe what fixed it was downloading and opening some random JavaScript repo, but maybe it was rather switching to "Trusted mode" (which makes me a bit nervous, but OK).
2. Once the assistant window input became active, I wrote something short like "hi", but nothing happened after pressing Ctrl-Enter. I rage-clicked around a bit, so it's possible I queued multiple requests. About 30 seconds later, I suddenly got a reply (something like "hi, what do you want me to do"). That's... not great latency. :)
3. Once I got it working, I opened the sidecar project and sent my second assistant prompt. After a few tens of seconds I got back this response: "You have used up your 5 free requests. Please log in for unlimited requests." (Idk what those 5 requests were...)
I gave it one more go by creating an account. However, after logging in through the browser popup, "Signing in to CodeStory..." spins for a long time, then disappears, but AIDE still isn't logged in. (Even after trying again after a restart.)
> 2. Once the assistant window input became active, I wrote something short like "hi", but nothing happened after pressing Ctrl-Enter. I rage-clicked around a bit, so it's possible I queued multiple requests. About 30 seconds later, I suddenly got a reply (something like "hi, what do you want me to do"). That's... not great latency. :)
Yup, that's because of the traffic and the LLM rate limits :( We are getting more TPM right now, so the latency spikes should go away. I had half a mind to spin up multiple accounts to get higher TPM, but oh well... If you do end up using your own API key, then there is no added latency at all; right now the requests get pulled into a global queue, so that's probably what's happening.
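If you want intuition for why a shared queue behaves like this, here's a toy model (all numbers made up): one rate-limited worker drains a global queue, so even a tiny request waits behind everyone else's.

```python
import asyncio
import time

async def worker(queue: asyncio.Queue) -> None:
    # One TPM-limited consumer draining everyone's requests in order.
    while True:
        submitted, name = await queue.get()
        await asyncio.sleep(2.0)  # pretend each LLM call takes ~2s under rate limits
        print(f"{name}: total latency {time.monotonic() - submitted:.1f}s")
        queue.task_done()

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    asyncio.create_task(worker(queue))
    for i in range(15):  # a burst of other users' traffic already queued
        queue.put_nowait((time.monotonic(), f"user-{i}"))
    queue.put_nowait((time.monotonic(), 'your "hi"'))  # lands ~30s out
    await queue.join()

asyncio.run(main())
```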
> 3. Once I got it working, I opened the sidecar project and sent my second assistant prompt. After a few tens of seconds I got back this response: "You have used up your 5 free requests. Please log in for unlimited requests." (Idk what those 5 requests were...)
The auth flow being wonky is on us; we did fuzz-test it a bit, but as with any software, it slipped through the cracks. We were even wondering whether to skip the auth completely if you are using your own API keys; that way there is zero-touch interaction with our LLM proxy infra.
Thanks for the feedback though, I appreciate it, and we will do better.
Postgres is the only thing on my Debian that doesn't seamlessly upgrade across dist-upgrades; instead it leaves old versions around for me to deal with manually... which I never seem to get around to.
No, what I meant is that the install path for Postgres on Debian involves installing versioned packages. It's the only approved way of installing Postgres from the Debian repos that I'm aware of.
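Concretely, the manual part is the postgresql-common cluster dance after each major bump; roughly this (wrapped in Python just for illustration, version numbers are an example, run as root):

```python
import subprocess

# Debian ships versioned packages (postgresql-15, postgresql-16, ...) and
# keeps the old cluster running after a dist-upgrade until you migrate it.
subprocess.run(["pg_lsclusters"], check=True)                    # list old and new clusters
subprocess.run(["pg_upgradecluster", "15", "main"], check=True)  # migrate 15/main to the new version
subprocess.run(["pg_dropcluster", "15", "main"], check=True)     # drop the old cluster afterwards
```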
Interesting idea! So... how strong is it, compared to some benchmarks? :-)
Given LLM performance on chess, I find it conceivable it could get to a similar level to GNUGo? (It'd be interesting if LLMs were generally on par with simple alpha-beta.)
Thanks for the links. Let me set up a battle between GNUGo and GoFormer.
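GNU Go speaks GTP (gnugo --mode gtp), so the referee loop is short. A rough sketch; goformer_genmove() is a hypothetical stand-in for however GoFormer emits a move:

```python
import subprocess

# GNU Go talks the Go Text Protocol over stdin/stdout.
gnugo = subprocess.Popen(
    ["gnugo", "--mode", "gtp"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

def gtp(cmd: str) -> str:
    # GTP replies start with "= " and are terminated by a blank line.
    gnugo.stdin.write(cmd + "\n")
    gnugo.stdin.flush()
    reply = []
    while (line := gnugo.stdout.readline().strip()):
        reply.append(line)
    return reply[0].lstrip("= ")

def goformer_genmove(opponent_move: str) -> str:
    # Hypothetical hook: query the GoFormer model for its reply move.
    raise NotImplementedError

gtp("boardsize 19")
gtp("clear_board")
for _ in range(200):  # naive move cap; a real harness should handle passes/resigns
    black = gtp("genmove black")              # GNU Go plays black
    gtp(f"play white {goformer_genmove(black)}")
```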
I was also inspired by how well it played chess, so I am trying to train one to take on Go.
It is still an early version, as I am still enhancing the model with more data. But it already exhibited reasonable moves as I played with it.
I was so blown away that I just wrote a chat-based integration for vim / neovim (https://github.com/pasky/claude.vim) - actually, 95% of the 600 LoC was written by Claude. It is finally at a level where "pair programmer" becomes a way better UX than the "code completion" paradigm.
(BTW I think that with some further AI-assisted work and the tools interface, you could mostly match all the web interface features in an integration like this. And using the API is less brutally rate-limited.)
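For the curious, the heart of such an integration is a single call to the Messages API; stripped of the vim glue, it's roughly this (model ID is just an example):

```python
import json
import os
import urllib.request

req = urllib.request.Request(
    "https://api.anthropic.com/v1/messages",
    headers={
        "x-api-key": os.environ["ANTHROPIC_API_KEY"],
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    data=json.dumps({
        "model": "claude-3-opus-20240229",  # example model ID
        "max_tokens": 1024,
        # The plugin's job is mostly assembling this: chat history plus code context.
        "messages": [{"role": "user", "content": "Refactor this function: ..."}],
    }).encode(),
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["content"][0]["text"])
```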
The extension works on Hacker News, but it doesn't work on any site that's marked experimental (not even on GitHub, which is shown as an example in the user guide); it always shows "Compose Now Not Available". I also tried Compose.AI to compose this message, but all suggestions just threw away some of the info.
I just looked into it. I think the misunderstanding is that force enable only enables autocomplete. Compose Now and Rephrase still misbehave on the majority of websites and make them hard to use, so we didn't include them in the force enable functionality. There's a note about it in the Notion doc Michael linked, but that isn't clear enough in the product. Thanks for letting us know! I'll fix that!
Let me know if you're actually not seeing autocomplete on GitHub either. I'd love to debug that, but I checked again and it seems to be working for me right now.
I'm curious what you mean by "also tried Compose.AI to compose this message but all suggestions just threw away some of the info"
Compose Now is only available on certain websites, but we are working on enabling it everywhere else. You can find a list of websites we are officially integrated with here: https://composeai.notion.site/Supported-Tools-0f3ee54d5ef04d.... When a website is "experimental", you can force enable the extension (autocomplete) there but may encounter some issues. Thanks for pointing out the GitHub example; we'll look into that.