
I've used both the 7B and 13B instruction-tuned LLaMA weights (quantized using the llama.cpp scripts). Either I am doing something wrong, or these two models are nowhere near the level of ChatGPT. Many times they return something totally irrelevant to my question, stop responding, switch to a different language, or otherwise return the wrong answer. ChatGPT does none of this (other than sometimes giving a wrong answer due to hallucination).

Reading through the README and issues on the llama.cpp project, there is some speculation that there is a bug in the quantization, or possibly a bug in the inference (less likely I think).
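For intuition on where quantization error comes from, here is a minimal sketch of blockwise 4-bit quantization in the spirit of llama.cpp's q4 formats. This is an illustration only, not the actual llama.cpp code or on-disk layout: the block size, scale rule, and function names are assumptions chosen for clarity.

```python
import numpy as np

def quantize_q4(x, block=32):
    # Simplified blockwise 4-bit quantization: one float scale per block,
    # values rounded to signed integers in [-8, 7]. Illustrative only;
    # NOT the exact llama.cpp q4_0 scheme or storage format.
    x = x.reshape(-1, block)
    scale = np.max(np.abs(x), axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0  # avoid division by zero for all-zero blocks
    q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_q4(q, scale):
    # Reconstruct approximate floats from the 4-bit codes and block scales.
    return (q.astype(np.float32) * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.normal(size=256).astype(np.float32)   # stand-in for a weight row
q, s = quantize_q4(w)
w_hat = dequantize_q4(q, s)
max_err = float(np.max(np.abs(w - w_hat)))
print(max_err)  # nonzero: every weight is perturbed by up to half a scale step
```

The point is that even a correct q4 implementation perturbs every weight, so degraded output alone doesn't distinguish "expected quantization loss" from "bug in the quantization code"; that's why the llama.cpp issues are comparing quantized output against f16 output.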

I hope this is true and, once fixed, the models can perform at or past the ChatGPT level. If it's not true and these models are performing correctly, then either the metrics used to compare them to GPT are garbage and don't capture real-world use, or the instruction tuning done by the Stanford team is not up to par.



LLaMA hasn't been fine-tuned with RLHF, so it requires additional prompting. Check out the open-assistant[0] project for an open-source ChatGPT equivalent (WIP).

[0]: https://github.com/LAION-AI/Open-Assistant



It looks stupid compared to ChatGPT.

Prompts:

* Tell me about OpenAI.

* Make it shorter, within 50 words.

* Rephrase it, like a rap singer.


> What does OpenAI do? OpenAI is a company founded by Elon Musk and Reid Hoffman in 2015 to develop artificial intelligence products for the general public. They are most well known for releasing a large language model, GPT-3, in 2017, which was developed using a method called deep learning.

OpenAI focuses on creating tools and technologies that allow developers to create AI applications that are accessible to the general public. They are particularly interested in developing tools that can help developers build AI applications that can perform tasks that are important to society, such as in the areas of healthcare, education, and safety.

In addition to GPT-3, OpenAI has also released other AI tools and technologies, including an open-source platform for training AI models, called OpenAI Research, and a library of pre-trained AI models for use in various applications.

Overall, OpenAI’s goal is to make AI technology more accessible and useful to the general public, and they are dedicated to creating products that can help create a more sustainable and inclusive future for society.

> How are they funded?

OpenAI is a not-for-profit organization, so it does not rely on any financial support from individuals, corporations, or governments. Instead, it is funded through a variety of sources, including venture capital firms, research institutions, and individual donations.

Some of the largest financial supporters of OpenAI include Microsoft, Amazon, and Google, who have invested millions of dollars into the organization. Additionally, OpenAI has received financial support from a number of other prominent technology companies and research institutions.

Overall, the funding sources for OpenAI reflect the organization’s goal of creating innovative AI tools and technologies that can have a positive impact on society.

> Who are OpenAI’s competitors?

There are many AI competitors to OpenAI, including companies such as Microsoft, Amazon, Google, Facebook, and Apple. These companies are all investing heavily in developing their own AI products and technologies, and they all have their own goals and objectives in the field.

Some of OpenAI’s competitors are specifically focused on creating AI tools and technologies for the general public, while others are focused on creating AI solutions for specific industries or businesses. For example, Microsoft is known for its popular Microsoft Office products, while Amazon is known for its e-commerce platform.

It’s worth noting that the development of AI technology is a rapidly evolving field, and there will likely be many new competitors and advancements in the years to come.

----

Seems pretty decent to me. Sure, it's not on the level of ChatGPT, but it's also 1/14th the size of the original GPT-3.


This is why Alpaca is a big deal: it shows what LLaMA can do after it's been fine-tuned to follow instructions, the way ChatGPT has been.


Alpaca uses Self-Instruct[0], which is better than pre-training alone, but I wouldn't expect it to reach the level of ChatGPT (RLHF) in terms of human-friendly prompting. OpenAssistant should make it close to ChatGPT (the GPT-3.5 version) if LLaMA is as powerful as claimed.

[0]: https://arxiv.org/abs/2212.10560



