Reading the comments in this thread, with the rightful distrust of OpenAI and criticism of the model, it occurs to me that the underlying problem we’re facing here comes down to stakeholders and incentive structures.
AI will not be a technical problem (nor a solution!); rather, our civilization will continue to be bottlenecked by problems of culture. OpenAI will succeed or fail for cultural reasons, not technical ones. Humanity will benefit from or be harmed by AI for cultural reasons, not technical ones.
I don’t have any answers here. I do, however, get the persistent impression that AI has not shifted the ground under our feet. The bottlenecks and underlying paradigms remain basically the same.
> it occurs to me that the underlying problem we’re facing here comes down to stakeholders and incentive structures.
> (...) our civilization will continue to be bottlenecked by problems of culture. OpenAI will succeed or fail for cultural reasons, not technical ones. Humanity will benefit from or be harmed by AI for cultural reasons, not technical ones.
It's capitalism. You can say it, it's OK. Warren Buffett isn't going to crawl out of your mirror and stab you.
All of the "cultural" problems around AI come down to profit being the primary motive driving and limiting innovation.