In the API, all GPT‑5 models can accept a maximum of 272,000 input tokens and emit a maximum of 128,000 reasoning & output tokens, for a total context length of 400,000 tokens.
So it's 272k for input and 400k in total once reasoning & output tokens are counted.
It's also not available in the EU, likely because of the GDPR. And IIRC the UK implemented something similar to the GDPR after Brexit. Maybe that's why. A sketch of the token-budget arithmetic is below.
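To make the input/output split concrete, here's a minimal Python sketch of that budget check. The limits are the ones quoted above; the tokenizer choice (tiktoken's o200k_base, used as a stand-in since the exact GPT-5 encoding isn't specified) is an assumption.

    # Minimal sketch of the GPT-5 token-budget arithmetic, assuming o200k_base
    # is a reasonable proxy for counting input tokens.
    import tiktoken

    MAX_INPUT_TOKENS = 272_000
    MAX_OUTPUT_TOKENS = 128_000   # reasoning + visible output combined
    MAX_CONTEXT = MAX_INPUT_TOKENS + MAX_OUTPUT_TOKENS  # 400,000

    def check_request(prompt: str, requested_output: int) -> int:
        """Count prompt tokens and verify the request fits the published limits."""
        enc = tiktoken.get_encoding("o200k_base")
        input_tokens = len(enc.encode(prompt))
        if input_tokens > MAX_INPUT_TOKENS:
            raise ValueError(f"{input_tokens} input tokens exceeds the {MAX_INPUT_TOKENS} cap")
        if requested_output > MAX_OUTPUT_TOKENS:
            raise ValueError(f"{requested_output} output tokens exceeds the {MAX_OUTPUT_TOKENS} cap")
        return input_tokens

    tokens = check_request("Summarize the incident timeline below ...", requested_output=8_000)
    print(f"Prompt uses {tokens} of {MAX_INPUT_TOKENS} input tokens")

Note that the two caps sum exactly to the 400k context window, so they are enforced independently rather than traded off against each other.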
It's part of our process. Here's the internal timeline:

T+0 - Automatic comms thread created
T+1 XXX: Is this a P0, do we need a status page? @YYY
T+1 YYY: Eyes on
T+4 ZZZ: Yes, let's get super-generic status page up. @XXX / @YYY - you have one handy?
I see it now thx
Side note: the comments here are almost as valuable as the content itself. Remember when comments were a way to enhance content rather than attack or degrade it?
There has been a theme of instability with Gitlab.com over the last week or two. I'm not sure if it's growth-related (they've seen a steady increase in users/traffic) and they've hit a scaling limit, or if it's technical: they've made a number of infrastructure changes over the last few weeks that materially affect the main layers of the service.
For me the real test here is how they respond. As a paying customer I want to understand the issue, the efforts to prevent it in the future, and how they communicate all of that.