Typically, when a 6.7B model or similar beats a 33B model, it's not really true in my experience. At the least, I have a very high burden of proof before believing it.
While testing internally, Mistral worked well. But these models are just starting points. Will add support for models like WaveCoder-Ultra-6.7B, WizardCoder-33B, Magicoder-S-DS-6.7B, etc. soon.
Nice work, but it's unfortunately named. A privy also refers to a latrine.
Given that this project was started well after Continue.dev, I think it would be useful to include an FAQ or a comparison table on what exactly makes this different.
Really wish both completion and chat could be run against the same LLM endpoint. Most of these seem to be doing their own incompatible Docker thing, which is annoying.
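For illustration, here's a minimal sketch of what that could look like, assuming one OpenAI-compatible server behind both features; the base URL and model name below are placeholders, not anything these tools actually ship:

```typescript
// Hypothetical: completion and chat share ONE OpenAI-compatible server.
// BASE_URL and MODEL are placeholders for illustration only.
const BASE_URL = "http://localhost:8000/v1";
const MODEL = "codellama";

// Plain completion, e.g. for inline suggestions.
async function complete(prompt: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: MODEL, prompt, max_tokens: 64 }),
  });
  const data = await res.json();
  return data.choices[0].text;
}

// Chat, e.g. for a sidebar assistant -- same server, different route.
async function chat(message: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: MODEL,
      messages: [{ role: "user", content: message }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Anything that speaks the OpenAI wire format (llama.cpp's server, vLLM, Ollama's compatibility layer, etc.) could in principle sit behind both functions.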
The thing I find the most valuable about Copilot is that it lives directly in VSCode. I understand it's not trivial and this has only one public commit, but long-term do you have plans to move closer to the editor?
How do you see this in relation to other projects like continue.dev?
Yes, absolutely. The goal is to come as close as possible to GitHub Copilot in terms of DX. Will add a comparison table soon with other players in the space like continue.dev, TabbyML, etc.
Yes, well, you've basically missed the entirety of the British Empire, most of whom have at some point in not-too-recent history called a toilet a privy. It's an older term, but still very well recognised.
So, you essentially took my project ( https://github.com/rubberduck-ai/rubberduck-vscode ), and instead of contributing to it or forking it, you recommitted all the code under your own name, with a few small tweaks.
To be fair, I saw that you give credit, but it's still disappointing given that probably 98% of the project is the work of the Rubberduck contributors. I know Rubberduck is not that active any more, but it's disappointing nonetheless.
Shout out to OP for ModelFusion. Probably one of the better ways to use LLMs instead of just integrating the OAI API and reinventing the rest of the wheel.
Handles lots of corner cases and lets you swap in other models easily.
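To make the "swap in other models easily" point concrete, here's a rough sketch of the pattern; this is not ModelFusion's actual API, and every name in it is made up for illustration:

```typescript
// Hypothetical sketch of the pattern, NOT ModelFusion's real API:
// provider details hidden behind one generator shape, so the caller
// can swap models without touching the rest of the code.
type TextGenerator = (prompt: string) => Promise<string>;

function openAICompatible(
  baseUrl: string,
  model: string,
  apiKey?: string
): TextGenerator {
  return async (prompt: string) => {
    const res = await fetch(`${baseUrl}/chat/completions`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        ...(apiKey ? { Authorization: `Bearer ${apiKey}` } : {}),
      },
      body: JSON.stringify({
        model,
        messages: [{ role: "user", content: prompt }],
      }),
    });
    const data = await res.json();
    return data.choices[0].message.content;
  };
}

// Swapping providers is a one-line change at the call site:
const hosted = openAICompatible(
  "https://api.openai.com/v1",
  "gpt-3.5-turbo",
  process.env.OPENAI_API_KEY
);
const local = openAICompatible("http://localhost:8000/v1", "codellama"); // placeholder server
```

The value of a library like ModelFusion is that it maintains this abstraction for you, plus the streaming, retries, and corner cases mentioned above.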
Hi @lgrammel, my intention was not to undermine the contributions made by you and others. I've added the missing list of original contributors to the repo. The commit link is http://tinyurl.com/2uzdefak.
I am genuinely curious: what is your expectation here? He gave you credit on GitHub. He even said he's forever "indebted", which IMO is too much. You put out the code under the MIT license, and the MIT license means anyone can copy and fork it.
Why are you disappointed? Are you disappointed because a person of Indian origin did it better than you? Or are you disappointed because you wanted all the fame, but he took your code fork (still valid under the MIT license) and did better marketing than you with his wrapper?
If you don't want people to use your work, put it behind a paywall. It is not difficult to understand.
Considering both repositories are on GitHub, actually forking it instead of re-committing it all as an 'init commit' would be a good start.
As you said, nothing compels srikanth235 to do this, but it's a generally more respectable (acceptable?) way to continue someone else's work as a new project.
Also not sure why you had to bring the race of srikanth235 into it.
You need to step back, cool down and get some perspective. No rational person on the planet could interpret his critique as having any racism or even racist undertones.
srikanth235 made the right move by acknowledging the contributor list from the previous project that was lost due to this repo not being a fork of the original.
I second this call to step back and get some perspective, mainly because no comment by the claimant here was racist, and none of the comments in this chain contained any racist undertones. Rather, it seems that our green-text friend here has jumped to a conclusion and inferred something that wasn't present.
I suspect it's making use of the 'trusted personal advisor' connotations, similar to the metaphor of having a co-pilot, if you will - https://en.wikipedia.org/wiki/Privy_council
> Some of the popular LLMs that we recommend are: Mistral, CodeLLama
1. Surprised Mistral (Mixtral?) is recommended for code generation / explanation alongside a fine-tuned CodeLlama?
2. Recent human evals put Microsoft's WaveCoder-Ultra-6.7B (SoTA w/ GPT-4) at the top of the rankings, with WizardCoder-33B and Magicoder-S-DS-6.7B trailing: https://twitter.com/TeamCodeLLM_AI/status/174755128687745064...