OpenAI GPT-3.5-turbo-instruct released
25 points by Palmik on Sept 18, 2023 | 21 comments
I could not find a blog post as of now, so here's the announcement email:

Hello! We are excited to announce the release of gpt-3.5-turbo-instruct, our latest model that serves as a replacement for several deprecated models, such as text-davinci-003.

Key Features: Gpt-3.5-turbo-instruct is an InstructGPT 3.5 class model. It’s trained similarly to previous Instruct models such as the text-davinci series while maintaining the same speed as our turbo models.

Pricing: We are committed to making cutting-edge technology accessible, so we have priced gpt-3.5-turbo-instruct in line with our other turbo GPT-3.5 models with 4K context.

Thank you for being a part of our journey and for building with OpenAI. Your support enables us to continually advance and bring you the best in AI technology.

Best regards, The OpenAI team






"GPT-4 plays chess at a strong club level" according to @BorisMPower, whose X/Twitter profile says "Member of Technical Staff @OpenAI". And today @GrantSlatton claimed that gpt-3.5-turbo-instruct plays chess at 1800+ vs Stockfish when prompted using PGN.

On Aug 10, 2023, @BorisMPower wrote (https://x.com/BorisMPower/status/1689838493806333953):

> [...] GPT-4 plays chess at a strong club level when properly prompted, which is impossible to achieve without having a good internal model of the game.

> Even at Go, the model does ~10x better than random, by essentially picking up on locality being a strong signal.

> I don’t think anything has been published unfortunately. ELO is around 1800

On Sep 18, 2023, @GrantSlatton wrote (https://x.com/GrantSlatton/status/1703913578036904431):

> The new GPT model, gpt-3.5-turbo-instruct, can play chess around 1800 Elo.

> I had previously reported that GPT cannot play chess, but it appears this was just the RLHF'd chat models. The pure completion model succeeds.

> The new model readily beats Stockfish Level 4 (1700) and still loses respectably to Level 5 (2000). Never attempted illegal moves. Used clever opening sacrifice, and incredibly cheeky pawn & king checkmate, allowing the opponent to uselessly promote.

> https://lichess.org/K6Q0Lqda

> I used this PGN style prompt to mimic a grandmaster game.

> The highlighting is a bit wrong. GPT made all its own moves, I input Stockfish moves manually.

> h/t to @zswitten for this prompt style

> [OpenAI Playground screenshot showing PGN game in the prompt]

I was able to reproduce this just now: gpt-3.5-turbo-instruct, prompted with PGN, defeated Stockfish Level 4 (1700?) on LiChess (https://lichess.org/D39lnanQ).

Here are the prompts/code I used: https://github.com/jordancurve/gpt-vs-stockfish/blob/main/ga...
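
The gist of the approach, if you don't want to dig through the repo (this is a simplified sketch of the idea, not the exact code from that link; the PGN headers, stop setting, and move handling are placeholders):

    import openai  # 0.x library; assumes OPENAI_API_KEY is set in the environment

    # Give the model a PGN prefix framed as a strong game and let it complete
    # the next move. Stockfish's replies get appended to the prefix by hand.
    pgn_prefix = (
        '[Event "FIDE World Championship"]\n'
        '[White "Magnus Carlsen"]\n'
        '[Black "Stockfish"]\n'
        '[Result "1-0"]\n'
        "\n"
        "1. e4 e5 2. "
    )

    response = openai.Completion.create(
        model="gpt-3.5-turbo-instruct",
        prompt=pgn_prefix,
        max_tokens=8,
        temperature=0,
        stop=[" "],  # cut off after a single move token, e.g. "Nf3"
    )
    print(response["choices"][0]["text"].strip())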


Hey folks, has anyone tried stream=True with this? It works in the playground, but it appears that you cannot actually stream completions from this model via the Python openai package.

Wondering if it's a weird PEBKAC issue or if someone else has had the same experience?
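
For reference, this is the minimal sketch of what I'd expect to work with the 0.x Python package (nothing here is specific to gpt-3.5-turbo-instruct except the model name):

    import openai  # assumes OPENAI_API_KEY is set in the environment

    # Streaming a completion: iterate over the generator and print each chunk's
    # text as it arrives.
    stream = openai.Completion.create(
        model="gpt-3.5-turbo-instruct",
        prompt="Write a haiku about streaming APIs.",
        max_tokens=50,
        stream=True,
    )
    for chunk in stream:
        print(chunk["choices"][0]["text"], end="", flush=True)
    print()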


It's a completion model, which can work a lot better for certain use cases.

E.g. I can ask it to generate a huge chunk of code and it doesn't try to give an "example"; it generates a realistically long bit of code.

This is particularly great when you want to generate an entire webpage, vs. having a chat with an agent that's going to tell you how to build the webpage yourself with small snippets of example code.
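
As a rough illustration (my own sketch, nothing official), the trick is to frame the prompt as the beginning of the document you want, so the model continues the page instead of explaining how to write it:

    import openai  # assumes OPENAI_API_KEY is set in the environment

    # Sketch: start the HTML in the prompt so the completion model emits the
    # rest of the page rather than advice plus small snippets.
    prompt = (
        "Complete HTML for a single-page portfolio site with inline CSS:\n"
        "<!DOCTYPE html>\n<html>\n"
    )

    response = openai.Completion.create(
        model="gpt-3.5-turbo-instruct",
        prompt=prompt,
        max_tokens=2048,
    )
    print("<!DOCTYPE html>\n<html>\n" + response["choices"][0]["text"])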


Is this a 175B or a 1.3B InstructGPT? Afaik the original InstructGPT that was RLHF'd into ChatGPT was 1.3B (https://openai.com/research/instruction-following).

If I understand correctly, this one is actually 175B?


This looks great, but I don't see any mention of performance. The TPM rate limit is almost 3x, which helps a lot with non-conversational tasks, so it would be a huge upgrade if the output quality is equivalent to or better than the chat model.


The existing gpt-3.5-turbo model is also an instruct model. So what's different?

I didn't see an email. Are you sure this is not about something that happened last year?

Maybe this one is designed to be more compatible with the way people use text-davinci-003.


It's not a chat model; it's like the older GPT-3 instruct models such as text-davinci-003, but as cheap as regular gpt-3.5-turbo.

It's definitely new; I got the email too.


I’m really confused… I thought gpt-3.5-turbo was already an instruct model. Was that not the case? Is that why we always had to add a chat prompt before any message?


Turbo was tuned for chat specifically. This is just for following instructions.


Beginner's question, but can you give an example of the difference? Does this mean that if I ask it something, it won't start with "Sure!" despite being asked not to?


Something like that, yeah. It'll try to complete sequences according to your instructions (if you give any), or it'll just continue the sequence. It's basically not going to try to chat with you unless you paste in a chat transcript.
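
A small, hypothetical side-by-side of the two prompting styles (using the 0.x Python library; the prompts are just made-up examples):

    import openai  # assumes OPENAI_API_KEY is set in the environment

    # Instruct/completion style: hand it text to continue or an instruction to
    # follow; it completes the text rather than opening with a chatty "Sure!".
    completion = openai.Completion.create(
        model="gpt-3.5-turbo-instruct",
        prompt="Translate to French: Good morning\n",
        max_tokens=20,
    )
    print(completion["choices"][0]["text"].strip())

    # Chat style: the chat-tuned model expects role-tagged messages and tends
    # to answer conversationally.
    chat = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Translate to French: Good morning"}],
    )
    print(chat["choices"][0]["message"]["content"])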


Can anyone confirm being able to use it with the API?


I haven't checked my email, but using the completions API I do get a response with model "gpt-3.5-turbo-instruct". Interestingly, though, isn't the completions API deprecated? Are they bringing it back?

I see it's briefly mentioned in their docs under the deprecations section.

https://platform.openai.com/docs/deprecations

> Note: The recommended replacement, gpt-3.5-turbo-instruct, has not yet launched. Impacted customers will be notified by email when it becomes available.


Yes, I've called it using the Python library like this:

    response = openai.Completion.create(
        model="gpt-3.5-turbo-instruct",
        prompt="Write a tagline for an ice cream shop."
    )
More notes here: https://github.com/simonw/llm/issues/284


Running the curl command below, I see it's available from the API:

    curl https://api.openai.com/v1/models/gpt-3.5-turbo-instruct -H "Authorization: Bearer <YOUR_TOKEN>"


Yes, I just started using it in a chatbot (it's less corporate-speaky than ChatGPT)


Does it allow batching like davinci-003?
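
Not confirmed for this model, but the old completion API accepted a list of prompts in a single request; assuming gpt-3.5-turbo-instruct keeps that behaviour, it would look something like:

    import openai  # assumes OPENAI_API_KEY is set in the environment

    # Assumption: prompt still accepts a list for batched requests, as it did
    # with text-davinci-003; each choice carries an index back to its prompt.
    response = openai.Completion.create(
        model="gpt-3.5-turbo-instruct",
        prompt=["Tagline for a bakery:", "Tagline for a gym:"],
        max_tokens=20,
    )
    for choice in sorted(response["choices"], key=lambda c: c["index"]):
        print(choice["index"], choice["text"].strip())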


Does it have a 16k or larger context? 4k is too small for us.


Awesome! Can we fine-tune this model too?


Had the same question. Did you figure it out?



