Yah-- what we have now is enough to be enormously disruptive as it is absorbed.
The big question is how much better it's going to get. We may be plateauing in capability-- or at least slowing in capability growth-- or we may not be.
But it doesn't need to get better for it to eat a lot of peoples' lunch.
My opinion seems to shift every other day. Recently Roon, the "famous" anon X account who works at OpenAI and presumably has access to some unreleased info, was asked "are we plateauing soon?" and answered "not even close". I honestly don't know what to think.
OTOH Altman has said that the next release will be about the same advance over GPT-4 as that was over GPT-3. Dario Amodei has said something roughly equivalent: expect solid gains this year but no reality-bender.
I think the real potential gains come when they start extending the architecture; as long as it's just scaling up and different training/inference regimes, it seems it'll be more of the same rather than a game changer.
Given how connected the whole SF/AI scene appears to be (alcohol + drugs too?), it's hard to imagine a company the size of OpenAI not leaking. If there had been any amazing discoveries there, I think there'd at least be rumors (and not just "they seem to have something called Q* going on").
I feel like for LLMs at least, we've not seen a lot of progress on the high end for a year or so. GPT-4 Turbo is a little better than GPT-4, but mainly only for things that can benefit from huge contexts.
Even if we are plateauing in GPT number (4, 5, 12, whatever), there is still a lot of meat left on those bones.
For instance: Chemists using the AI backwards. Take some phrase like 'vanadium increases the Young's modulus versus silicon in 1040 steel' and then work the AI backwards from that phrase. As in, assume the AI outputted that phrase and see what inputs were most likely to generate it. There may be some real discoveries just by working an AI backwards iteratively.
Simple things like that are still open and attainable right now; it's just that AI is still so young that we really haven't explored all of what these models can do yet.
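To make that concrete, here's one way you might approximate "working the AI backwards" without literally inverting the network: score candidate inputs by the log-likelihood a causal LM assigns to the target phrase and keep the highest scorers. This is just a minimal sketch under my own assumptions; the model name ("gpt2") and the candidate prompts are placeholders, and a real search would generate and mutate candidates rather than hand-list a few.

    # Minimal sketch: approximate "running the model backwards" by scoring
    # candidate inputs on how likely the model is to emit the target phrase.
    # Assumptions: a Hugging Face causal LM ("gpt2" is just a stand-in) and a
    # hand-written candidate list; a real search would generate/mutate candidates.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # placeholder model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    target = "vanadium increases the Young's modulus versus silicon in 1040 steel"

    candidate_prompts = [  # hypothetical inputs to rank
        "Describe how vanadium additions change the stiffness of 1040 steel:",
        "Compare alloying elements for medium-carbon steel:",
        "List some facts about thermoplastic polymers:",
    ]

    def target_log_likelihood(prompt: str, target: str) -> float:
        """Sum of log-probs the model assigns to the target tokens given the prompt."""
        prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
        target_ids = tokenizer(" " + target, return_tensors="pt").input_ids
        input_ids = torch.cat([prompt_ids, target_ids], dim=1)
        with torch.no_grad():
            log_probs = torch.log_softmax(model(input_ids).logits, dim=-1)
        offset = prompt_ids.shape[1]
        total = 0.0
        for i in range(target_ids.shape[1]):
            # logits at position (offset + i - 1) predict the token at (offset + i)
            total += log_probs[0, offset + i - 1, target_ids[0, i]].item()
        return total

    # Rank candidates by how strongly they pull the model toward the target phrase.
    scores = {p: target_log_likelihood(p, target) for p in candidate_prompts}
    for p, s in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{s:9.2f}  {p!r}")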