Any short-term plans for Claude via AWS Bedrock? (For me personally, that's a blocker for trying it on our main codebase.)





Thanks for your interest in Aide!

If I understood that correctly, it would mean supporting Claude via the AWS Bedrock endpoint. We will make that happen.

If the underlying LLM does not change, then adding more connectors is pretty easy. I will ping the thread with updates on this.


Yep! And AWS Bedrock also gives you plenty of other models on the back end, plus better control over rate limits. (But for us the important thing is data residency; the code isn't uploaded anywhere.)

Is it ~just about adding another file to https://github.com/codestoryai/sidecar/blob/main/llm_client/... ?

I could take a look too: implementing this with Aide would be another way for me to test it. :-)

(https://github.com/pasky/claude.vim/blob/main/plugin/claude_... is sample code with a basic wrapper emulating the Claude streaming API with an AWS Bedrock backend.)
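
For a rough idea of the shape, here is a minimal, non-streaming sketch against the AWS SDK for Rust (aws-config + aws-sdk-bedrockruntime crates). The struct name and the complete() signature are my own placeholders, not sidecar's actual client trait, and the model id is just an example:

    // Sketch of a Bedrock-backed Claude client. The sidecar-side trait
    // this would plug into is not shown; names here are illustrative.
    use aws_sdk_bedrockruntime::{primitives::Blob, Client};
    use serde_json::json;

    pub struct AnthropicBedrockClient {
        client: Client,
        // e.g. "anthropic.claude-3-5-sonnet-20240620-v1:0"
        model_id: String,
    }

    impl AnthropicBedrockClient {
        pub async fn new(model_id: &str) -> Self {
            // Region and credentials come from the standard AWS config
            // chain, so prompts never leave the configured AWS account.
            let config = aws_config::load_from_env().await;
            Self {
                client: Client::new(&config),
                model_id: model_id.to_string(),
            }
        }

        pub async fn complete(&self, prompt: &str, max_tokens: u32) -> anyhow::Result<String> {
            // Anthropic models on Bedrock take the Messages API payload,
            // with anthropic_version pinned to the Bedrock-specific value.
            let body = json!({
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": max_tokens,
                "messages": [{ "role": "user", "content": prompt }],
            });
            let out = self
                .client
                .invoke_model()
                .model_id(&self.model_id)
                .content_type("application/json")
                .body(Blob::new(serde_json::to_vec(&body)?))
                .send()
                .await?;
            let parsed: serde_json::Value = serde_json::from_slice(out.body().as_ref())?;
            // The generated text lives in the first content block.
            Ok(parsed["content"][0]["text"].as_str().unwrap_or_default().to_owned())
        }
    }

Streaming would go through invoke_model_with_response_stream instead; adapting that event stream back into Claude's SSE format is essentially what the claude.vim wrapper above does.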


Yup! Feel free to add the client support; you are on the right track with the changes.

To test the whole flow out, here are a few things you will want to do (rough sketch below):

- https://github.com/codestoryai/sidecar/blob/ba20fb3596c71186... (you need to create the LLMProperties object over here)
- add support for it in the broker over here: https://github.com/codestoryai/sidecar/blob/ba20fb3596c71186...
- after this you should at the very least be able to test out Cmd+K (highlight a section and ask it to edit it)
- in Aide, if you go to User Settings: "aide self run", you can tick this and then run your local sidecar so you are hitting the right binary (kill the binary running on port 42424; that's the webserver binary that ships along with the editor)
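
In code terms the wiring is roughly the following; treat every name as approximate, since the actual constructors and enum variants live in the two files linked above:

    // Rough sketch of the plumbing above; the exact signatures of
    // LLMProperties, LLMType, LLMProvider and the broker live in the
    // linked files, so treat every name here as approximate.

    // 1. Describe the model/provider pair the editor should route to.
    let properties = LLMProperties::new(
        LLMType::ClaudeSonnet,              // model family
        LLMProvider::Bedrock,               // new provider variant you add
        LLMProviderAPIKeys::Bedrock(creds), // AWS region/credentials, however you model them
    );

    // 2. Register the new client in the broker, next to the existing
    //    provider arms, so requests get routed to it.
    broker.add_provider(
        LLMProvider::Bedrock,
        Box::new(AnthropicBedrockClient::new(model_id).await),
    );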

If all of this sounds like a lot, you can just add the client and I can also take care of the plumbing!


Hmm, looks like this project is still pretty early for me. :)

My experience:

1. I didn't get a working installation window after opening it for the first time. Maybe what fixed it was downloading and opening some random JavaScript repo, but maybe it was rather switching to "Trusted mode" (which makes me a bit nervous, but OK).

2. Once the assistant window input became active, I wrote something short like "hi", but nothing happened after pressing Ctrl-Enter. I rage-clicked around a bit, so it's possible I queued multiple requests. About 30 seconds later, I suddenly got a reply (something like "hi, what do you want me to do"). That's... not great latency. :)

3. Once I got it working, I opened the sidecar project and sent my second assistant prompt. After a few tens of seconds I got back this response: "You have used up your 5 free requests. Please log in for unlimited requests." (I don't know what those 5 requests were...)

I gave it one more go by creating an account. However, after logging in through the browser popup, "Signing in to CodeStory..." spins for a long time, then disappears, but AIDE still isn't logged in. (Even after trying again after a restart.)

One more thought: maybe you got DDoS'd by HN?


> 2. Once the assistant window input became active, I wrote something short like "hi", but nothing happened after pressing Ctrl-Enter. I rage-clicked around a bit, so it's possible I queued multiple requests. About 30 seconds later, I suddenly got a reply (something like "hi, what do you want me to do"). That's... not great latency. :)

Yup, that's because of the traffic and the LLM rate limits :( We are getting more TPM right now, so the latency spikes should go away; I had half a mind to spin up multiple accounts to get higher TPM, but oh well... If you do end up using your own API key, there is no latency at all. Right now the requests get pulled into a global queue, so that's probably what's happening.

> 3. Once I got it working, I opened the sidecar project and sent my second assistant prompt. After a few tens of seconds I got back this response: "You have used up your 5 free requests. Please log in for unlimited requests." (I don't know what those 5 requests were...)

The auth flow being wonky is on us; we did fuzz-test it a bit, but as with any software, it slipped through the cracks. We were even wondering whether to skip auth completely if you are using your own API keys, so that there is zero-touch interaction with our LLM proxy infra.

Thanks for the feedback though, I appreciate it and we will do better.



