
It's 272,000 input tokens and 128,000 output tokens.




The website clearly lays them out as 400k input and 128k output [1]. I just updated my AI apps to support the new models. I routinely fill the entire context on large code calls. Input is not a "shared" context.

I found 100k was barely enough for a single project without spillover, so 4x that allows linking in adjacent codebases for large-scale analysis.

[1] https://platform.openai.com/docs/models/gpt-5


Oh, I had not grasped that the advertised “context window” size had to include both input and output.

But is the input really capped at 272k even if the output is, say, only 10k? Because the docs do say “max output”, so I wonder.


This is the only model where the input limit and the context limit are different values. The OpenAI docs team is working on updating that page.

Woah, that's really kind of hidden. But I think you can specify max output tokens. Need to test that!
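A minimal sketch of the budgeting the thread settles on, using the numbers stated above (400k context window, 272k input limit, 128k output cap). The `input_budget` helper is hypothetical, not part of any OpenAI SDK; it just assumes, per the comment above, that the input limit is a separate hard cap rather than `context − output`:

```python
# Token limits as described in the thread (GPT-5 docs):
CONTEXT_WINDOW = 400_000  # total tokens shared by input + output
MAX_INPUT = 272_000       # separate hard cap on input tokens
MAX_OUTPUT = 128_000      # hard cap on output tokens

def input_budget(reserved_output: int) -> int:
    """Input tokens available after reserving room for the reply.

    Input is bounded both by the model's own input limit and by
    whatever room the reserved output leaves in the context window.
    """
    if not 0 <= reserved_output <= MAX_OUTPUT:
        raise ValueError("reserved_output must be within the output cap")
    return min(MAX_INPUT, CONTEXT_WINDOW - reserved_output)

# Reserving the full output cap: 400k - 128k = 272k of input.
print(input_budget(128_000))  # 272000
# Reserving only 10k: the 272k input cap still binds.
print(input_budget(10_000))   # 272000
```

If the input limit instead floated with the output (i.e. `context − reserved_output`), the second call would return 390,000 — which is exactly the question the comment above raises, and which the docs' separate "max input" value answers.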


