
Because they do shady shit: by default, Copilot would "sample" your code for training while you used it. Maybe that's no longer the default, maybe it still is, but it was the default.

This type of thing erodes trust. Why should my proprietary code be used for training by default?

I was really annoyed by this.




OpenAI is not the same company as GitHub, and it has always been pretty clear that chats on ChatGPT are recorded and used for training (unless you now opt out).


Not sure why you're bringing OpenAI into it. My comment and the article are about "Copilot".

I'm talking about how, when using "GitHub Copilot", asking for a code suggestion would send the "context" back to GitHub / Microsoft and use that code as training data.

Your comment is interesting to me though because there does seem to be a surprisingly large amount of defending OpenAI going on. Almost seems automatic now.


Because Github Copilot is an interface into OpenAI Codex:

"GitHub Copilot is powered by OpenAI Codex, a new AI system created by OpenAI."

https://docs.github.com/en/copilot/overview-of-github-copilo...


> it would send the "context" back to GitHub / Microsoft

Because this is fundamentally how the system works. The context is the prompt.

> and use that code as training data

This part has never been true. It’s not how these systems work. Do you have anything to back up your claim?
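
To illustrate what "the context is the prompt" means in practice: an editor plugin takes the code around your cursor and packages it as the prompt sent to the completion service. The field names and structure below are a hypothetical sketch, not GitHub's actual API.

    # Hypothetical illustration: how an editor extension might package
    # the surrounding code ("context") as the prompt for a completion API.
    import json

    def build_completion_request(prefix: str, suffix: str, language: str) -> str:
        # The "context" is literally the text transmitted to the model.
        payload = {
            "prompt": prefix,      # code before the cursor
            "suffix": suffix,      # code after the cursor
            "language": language,
            "max_tokens": 64,
        }
        return json.dumps(payload)

    # The code around the cursor is what gets sent to the service:
    print(build_completion_request("def add(a, b):\n    ", "", "python"))

Whether that transmitted context is then retained and used for training is a separate question from whether it is sent at all.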



