Ask HN: Best Alternatives to OpenAI ChatGPT?
51 points by danielovichdk 11 months ago | hide | past | favorite | 39 comments
With the last few days' exposure of OpenAI's bowing down to being a for-profit company (imo), I would like to hear what else I can use for chatting with an assistant that is just as good as ChatGPT?

Anything as good?




Not really, no. The whole reason the fiasco occurred in the first place is because it's in a league of its own.

Bard and Claude are likely to be suggested, but the competency gap is palpable. I've tried using them and failed to find them sufficiently compelling to adopt into my workflow for programming tasks.


That's a bit of a generalization from one task though, no?

E.g. Bard is hooked into Google services like Maps and YouTube, so if you want to plan a trip or search the contents of a video, it is likely better.


Yeah, it is, I agree. I don't have a workflow where I bounce between LLMs trying to get the best results. I use LLMs when I think I'll get results faster than if I just Google for the answers. After a few letdowns with the competition I stopped defaulting to thinking about how they could help me.

Do you use a variety of LLMs in your daily life? Can you share some queries you'd issue to Bard for Maps/YouTube integrations that ChatGPT fails at? I'd be very interested in some concrete examples. Specifically in a post-plugin world where ChatGPT uses Bing to interface with the web.

I don't want to become blind to the advancements in competitor's tooling.


I’ve found Anthropic’s Claude v2 quite good as a general-purpose LLM assistant for a variety of tasks, including code generation and ideation.

You can find it from Anthropic directly[1] or access it through AWS Bedrock[2]. The latter is how I gained access.

1. https://claude.ai/

2. https://aws.amazon.com/bedrock/
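If you go the Bedrock route, here is a minimal sketch of what a call looks like (hedged: this assumes the `anthropic.claude-v2` model ID and that your AWS account has been granted Bedrock model access; Claude v2 uses the Human/Assistant prompt convention):

```python
import json

def build_claude_body(prompt: str, max_tokens: int = 300) -> str:
    # Claude v2 expects the "\n\nHuman: ...\n\nAssistant:" chat format.
    return json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    })

# The actual call (needs boto3, AWS credentials, and Bedrock model access):
# import boto3
# client = boto3.client("bedrock-runtime")
# resp = client.invoke_model(modelId="anthropic.claude-v2",
#                            body=build_claude_body("Hello, Claude!"))
# print(json.loads(resp["body"].read())["completion"])
```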


If you were using GPT-4 before, then nothing else is going to match it. If you were only using the free version, then you might be able to get by with the main competition.


Microsoft Copilot / Bing runs on GPT-4. For free.


Bing's RLHF is a mess compared to ChatGPT. Hopefully they can improve it, but the instruction following leaves something to be desired.


Not true. It’s the same model (AFAIK). Maybe the system prompt is different. But generally you can get the same performance and behaviour from it.


Me to Bing in Edge: "Copilot, are you running on GPT-4?"

Bing: "Hello! This is Bing. I’m sorry, but I’m not Copilot. I’m a chat mode of Microsoft Bing. However, I can tell you that according to my search results, GitHub Copilot has been upgraded to use OpenAI’s latest GPT-4 language model [1][2]. It is now more powerful and offers more accurate code suggestions and explanations[1]."

(Footnotes provided by the AI.)

[1]: https://github.blog/2023-11-08-universe-2023-copilot-transfo...

[2]: https://www.pcmag.com/news/microsoft-uses-gpt-4-to-turn-gith...


This is a reasonable response. There was ambiguity between you addressing it directly and its internal name not being Copilot. It clarifies and attempts to give an answer anyway.


How did you come to the conclusion that it was RLHF?


You will not find anything as good as the paid version of ChatGPT. You might try Phind [0].

[0] https://www.phind.com/


Azure OpenAI GPT-4


Claude (Anthropic) might be the closest direct alternative to ChatGPT (but it's not available in all countries). You might also want to try ChatDolphin by NLP Cloud (a company I created 3 years ago as an OpenAI alternative): https://chat.nlpcloud.com

Open source is also catching up very quickly. The best models you might want to try today are LLaMA 2 70B, Yi 34B, or Mistral 7B.


You could try Llama 2 70B and Mistral. Perplexity Labs is currently offering them on their website - https://labs.perplexity.ai/

You can choose which one you need from the drop-down.

I've not tested it thoroughly or anything, but LLMs have a wide variety of uses. The non-GPT ones suffice for quite a few of them, and are sometimes even better (than the free version; I've not tried GPT-4).


For coding and "reasoning" tasks, there isn't any that can match GPT4. You can try Bing but it is slightly nerfed in my experience.

For general writing, simple discussion, basic knowledge etc, Claude is pretty good. In fact, I find Claude's writing a lot better than GPT. It sounds more "normal".

For vision and API calls and developing apps and such, unfortunately, GPT is king and anything else pales in comparison.


Nothing is as good. Bard is quite good but not the same.


As others have mentioned, it's not there yet.

One good thing we can hope to expect from the OpenAI drama is that it is now very clear that we need truly open and powerful models. A new generation of startups can't be bottlenecked so critically on a single service. Hopefully OpenAI employees see this as well and do what they can to help decentralize generative AI.


200+ alternative LLMs here:

https://lifearchitect.ai/models-table/

GPT-4 is the largest right now, by a factor of about 100x:

https://lifearchitect.ai/models/


Phind.com’s custom model (which I think is publicly available) is pretty good for coding tasks. It helps enormously that it uses RAG with web search. I use it all the time, along with GPT-4, and GPT-4 is better, but Phind’s model is close and also very fast. I have nothing to do with the company, and am just a random (free) user.


Llama 2 (and variants). It has the lowest hallucination rate (https://github.com/vectara/hallucination-leaderboard), it's open source so we know what went into it, and the community can improve it.


What happened in the last few days that changed your opinion of how OpenAI was operating? Surely you didn’t previously think the board of the non-profit had, until that point, constrained in any way the development of ChatGPT…


No. But that holds especially true for subtle edge cases, rather than the general case. You might be able to get answers to "common" or "simple" questions just as well using Mistral7B or Llama 2 13B.


There are many good local and third-party LLMs, but there is no ChatGPT-style FOSS interface that can do math, browse the web, and read PDFs at a minimum. Does anybody know of any?


Try Perplexity AI: https://www.perplexity.ai/


I asked Perplexity which LLM powers it and it said OpenAI GPT-3.


That doesn’t mean anything. I have some fine tunes of Llama that will claim they are GPT-3 all day long. Models have no concept of an identity.


Eh, disagree. It means a lot. It's either hallucinating on a trivial use case or it's correct and not useful to OP. One of the seed prompts for GPT establishes what it is. I would find it odd if other, commercialized LLMs don't do the same since any sane PM would say "Wow, this hallucination is absolutely terrible for marketing ourselves. Fix it."

That said, https://blog.perplexity.ai/technical-faq/what-models-does-co... -- it's powered by OpenAI's GPT.


Try here - https://labs.perplexity.ai/

From the drop-down you can select Llama, Mistral, and Perplexity's own models.



Claude is useful. Bard seems good enough, but for more serious usage you probably want to wait for Gemini.


to second this, I really want an LLM that can call a REST API like ChatGPT plugins can. What is out there?

for this use case, it might be okay if it’s not as intelligent as GPT 4


> to second this, I really want an LLM that can call a REST API like ChatGPT plugins can. What is out there?

That's not really a model feature but a prompting + toolchain feature. Conceptually, there's no reason you couldn't build a parser for the chatGPT plugin description json, the descriptions of existing plugins, and prompt any model wrapped in a tool-using framework to call them. That's all outside of the model itself.
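As a concrete (hypothetical) illustration of that point: the "toolchain" side is just a prompt that lists the tools plus a parser for the model's structured reply, so any model that follows instructions can drive it. The tool names here are made up stand-ins for plugin manifests:

```python
import json

# Hypothetical tool registry standing in for ChatGPT plugin descriptions.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
    "search_web": lambda query: f"Top result for {query!r}",
}

# This text gets injected into any model's prompt, model-agnostically.
TOOL_PROMPT = (
    "To use a tool, reply ONLY with JSON like "
    '{"tool": "<name>", "arg": "<input>"}. '
    "Available tools: " + ", ".join(TOOLS)
)

def dispatch(model_reply: str) -> str:
    """Run a tool if the reply is a well-formed call; else pass it through."""
    try:
        call = json.loads(model_reply)
        return TOOLS[call["tool"]](call["arg"])
    except (json.JSONDecodeError, TypeError, KeyError):
        return model_reply  # ordinary answer, no tool requested
```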


I have used Llama 2 to do something like this. I build a prompt with a set of opcodes; the LLM responds with an opcode and an external system handles the call. Not sure if that is what you are looking for, however.


that sounds close, but how much tooling do you have to hand-roll for it?


Well, I had the LLM write the tooling for me. In my case I had an air-gapped system that based its input on the image of that system's desktop. The LLM wrote the code to capture images from the capture card and determine which window was active. That became an opcode. The LLM wrote a function to switch windows using a custom hardware module I wrote (presented as a USB HID keyboard to the remote system). Another opcode. I then had enough opcodes and the necessary code to perform basic actions on a non-networked PC.

Then I had tooling, which I mostly built, to generate a prompt with the opcodes and have the tooling determine the next steps based on the input and the desired output.

So I had to hand roll all of the tooling, but the LLM did 80% of that work.
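The opcode loop described above can be sketched roughly like this (all names and opcodes here are hypothetical; the real handlers would wrap the capture card and HID hardware):

```python
# Hypothetical opcode table: each opcode maps to a hand-rolled handler
# driving the air-gapped PC (stubbed out here with plain strings).
OPCODES = {
    "CAPTURE_SCREEN": lambda: "screenshot.png",  # grab a capture-card frame
    "ACTIVE_WINDOW": lambda: "Terminal",         # detect the focused window
    "SWITCH_WINDOW": lambda: "ok",               # send keys via HID module
}

def build_prompt(goal: str, last_result: str) -> str:
    # The LLM only ever sees the opcode menu plus the latest observation.
    menu = ", ".join(OPCODES)
    return (f"Goal: {goal}\nLast result: {last_result}\n"
            f"Reply with exactly one opcode from: {menu}")

def execute(llm_reply: str) -> str:
    # Normalize the model's reply and dispatch to the external system.
    op = llm_reply.strip().upper()
    if op not in OPCODES:
        raise ValueError(f"unknown opcode: {op}")
    return OPCODES[op]()
```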


So in your opinion being for profit is bad?

Interesting.

Specifically on OAI, there were several threads here before discussing how much money it was losing and whether they might not make it.


you.com


I just asked YouChat which LLM powers it and it responded by saying OpenAI's GPT3.



