Hacker News
Show HN: Privy - Open-source, privacy oriented alternative to GitHub Copilot (github.com/srikanth235)
111 points by srikanth235 8 months ago | 48 comments



Nice.

> Some of the popular LLMs that we recommend are: Mistral, CodeLLama

1. Surprised Mistral (Mixtral?) is recommended for code generation / explanation alongside a fine-tuned CodeLlama?

2. Recent human evals put Microsoft's WaveCoder-Ultra-6.7B (SoTA w/ GPT-4) at the top of the rankings, with WizardCoder-33B and Magicoder-S-DS-6.7B trailing: https://twitter.com/TeamCodeLLM_AI/status/174755128687745064...


Typically when a 6.7B model or similar beats a 33B model, it's not really true in my experience. At the least, I have a very high burden of proof before believing it.


Are you able to explain what the charts mean? Only one of the three has WaveCoder at the top.


Those charts show the pass@k metric (the probability that at least one of k generated samples passes the tests, estimated from n generations per problem) on the OpenAI and OctoPack code evals.

WaveCoder: https://arxiv.org/abs/2312.14187 (section 3.2)

Octopack: https://github.com/bigcode-project/octopack
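For concreteness, pass@k numbers like these are typically computed with the unbiased estimator from the HumanEval paper: generate n samples per problem, count the c that pass the unit tests, and estimate pass@k as 1 - C(n-c, k)/C(n, k). A minimal sketch in Python:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples (drawn from n generations, c of which are correct) passes.
    Computes 1 - C(n-c, k) / C(n, k) in a numerically stable form."""
    if n - c < k:
        # Every size-k subset must contain at least one correct sample.
        return 1.0
    return 1.0 - math.prod(1.0 - k / i for i in range(n - c + 1, n + 1))
```

For example, with n=2 generations of which c=1 is correct, pass@1 comes out to 0.5, as you'd expect from picking one of the two at random.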


While testing internally, Mistral worked well. But these models are just starting points. Will add support for models like WaveCoder-Ultra-6.7B, WizardCoder-33B, and Magicoder-S-DS-6.7B soon.


Nice work, but it's unfortunately named. A privy also refers to a latrine.

Given that this project was started well after Continue.dev, I think it would be useful to include an FAQ or a comparison table on what exactly makes this different.

https://github.com/continuedev/continue


I was thinking more in terms of something like secret or private :)

Will add comparison table soon.


I'm rooting for this, but running an LLM locally disqualifies a lot of workstations, for example laptops with integrated graphics.


You can run it on a different machine. I've set up my desktop to host ollama, and use it from my laptop just fine.
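For anyone wanting to try this setup: Ollama listens on port 11434, and setting OLLAMA_HOST=0.0.0.0 on the serving machine lets other hosts on the network reach it. A hedged sketch of querying a remote instance from Python, where the hostname `desktop.local` and the model name are placeholders for your own setup:

```python
import json
import urllib.request

# Placeholder hostname: replace with the machine running `ollama serve`.
OLLAMA_URL = "http://desktop.local:11434/api/generate"

def build_body(prompt: str, model: str = "mistral") -> bytes:
    """JSON body for Ollama's /api/generate endpoint (non-streaming)."""
    return json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()

def ask(prompt: str, model: str = "mistral", url: str = OLLAMA_URL) -> str:
    """Send one completion request to a remote Ollama and return the text."""
    req = urllib.request.Request(
        url,
        data=build_body(prompt, model),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The same endpoint works over Tailscale or any VPN, since it's just HTTP.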


Do you use continue.dev or privy with that setup?


I just use Tailscale to make it available on other machines. I use the interface through Emacs.


How is this different from continue.dev?


Tabby is another alternative.

https://github.com/TabbyML/tabby


Tabby does completion while Continue is more chat-focused, from memory?

There is also refact


Yes, Tabby focuses on auto-completion. We're more similar to continue.dev at this point in time. Will add a comparison table soon.


Really wish both completion and chat could be run against the same LLM endpoint. Most of these seem to be doing their own incompatible Docker thing, which is annoying.


This seems really cool, thanks for sharing!

The thing I find most valuable about Copilot is that it lives directly in VSCode. I understand it's not trivial and this has one public commit, but long-term do you have plans to move closer to the editor?

How do you see this in relation to other projects like continue.dev?


Yes, absolutely. The goal is to come as close as possible to GitHub Copilot in terms of DX. Will add a comparison table soon covering other players in the space like continue.dev, TabbyML, etc.


Nice!

Sorry if I ask this here, but is there any web-based version of VSCode other than VSCode.dev or any open source alternative?

I'm looking for a web-based IDE, something advanced like Project IDX

Is there any?


https://github.com/coder/code-server is like vscode.dev but self-hosted.


Does it do AI autocomplete?


Not as of now!


I am finding myself questioning the choice of a name that also means “toilet” or “outhouse”, but hey, you do you.


I was thinking more in terms of something like secret or private :)


Yes, well, you've basically missed the entirety of the British Empire, most of whom have at some point in not too recent history called a toilet a privy. It's an older term, but still very well recognised.


So, you essentially took my project ( https://github.com/rubberduck-ai/rubberduck-vscode ), and instead of contributing to it or forking it, you recommitted all the code under your own name, with a few small tweaks.

To be fair, I saw that you give credit, but it's kind of disappointing still given that prob 98% of the project is the work of the Rubberduck contributors. I know Rubberduck is not that active any more, but it's still disappointing.


Shout out to OP for ModelFusion. Probably one of the better ways to use LLMs instead of just integrating the OAI API and reinventing the rest of the wheel.

Handles lots of corner cases and lets you swap in other models easily.

https://github.com/lgrammel/modelfusion


Thanks!


They should include the original copyright notice in their version of the software according to your license.


Credit aside, if this is mostly your code there may be a copyright/license issue.

The original license: https://github.com/rubberduck-ai/rubberduck-vscode/blob/f11a...

The new project: https://github.com/srikanth235/privy/blob/9b4f8ce7e176ab45d5... no mention of the original copyright


Hi @lgrammel, my intention was not to undermine the contributions made by you and others. I've added all the missing original contributors to the repo. The commit link is http://tinyurl.com/2uzdefak.


Thanks, I appreciate it! Good luck with the project!


I am genuinely curious to know: what is your expectation here? He gave you credit on GitHub. He said he was forever "indebted", which IMO is too much. You put out code under the MIT license. The MIT license means anyone can take a copy and fork it.

Why are you disappointed? Are you disappointed because a person of Indian origin did it better than you, or are you disappointed because you wanted all the fame but he took your code (still valid under the MIT license) and did better marketing than you with a wrapper?

If you don't want people to use your code, put it behind a paywall. It is not difficult to understand.


Considering both repositories are on GitHub, actually forking it instead of re-committing it all as an 'init commit' would be a good start.

As you said, nothing compels srikanth235 to do this, but it's a generally more respectable (acceptable?) way to continue someone else's work as a new project.

Also not sure why you had to bring the race of srikanth235 into it.


[flagged]


You need to step back, cool down and get some perspective. No rational person on the planet could interpret his critique as having any racism or even racist undertones.

srikanth235 made the right move by acknowledging the contributor list from the previous project that was lost due to this repo not being a fork of the original.


I second this call to step back and get some perspective, mainly because no comment by the claimant here was racist, and none of the comments in this chain contained any racist undertones. Rather, it seems that our green-text friend here has jumped to a conclusion and inferred something that wasn't present.


Why are you bringing nationality politics into this when it's clearly not relevant?


My expectation would have been a fork, or a repo that's started with the full commit history.


Maybe I'm showing my rural roots here, but a privy is an outhouse. Of course, it also has the "secret" meaning, but...


I suspect it's making use of the 'trusted personal advisor' connotations, similar to the metaphor of having a co-pilot if you will - https://en.wikipedia.org/wiki/Privy_council


Maybe it's referring to "code smell"?


I was thinking more in terms of something like secret or private :)


[flagged]


> I am fairly confused by the amount of work people are willing to devote to runner-up work

You're not entirely wrong, even if off-topic. See also: https://apenwarr.ca/log/20211229


Interesting read.

Please note that this quote, partial as it stands, misrepresents what I meant, but it's there for all to see and also not that important :)


Wow, that was an awesome post; funny and insightful. Thanks for sharing!


Thanks for linking. This is actually very well written!


Stopped reading after this:

> Recently


/s



