>Ensuring that the benefits of AGI are broadly distributed is critical. The historical impact of technological progress suggests that most of the metrics we care about (health outcomes, economic prosperity, etc.) get better on average and over the long-term, but increasing equality does not seem technologically determined and getting this right may require new ideas.
I don't see why we should trust OpenAI's promises now, when they've broken promises in the past.
> I don't see why we should trust OpenAI's promises now, when they've broken promises in the past.
I don't see what "our" trust has to do with anything. Perhaps you're an investor in OpenAI and your trust matters to OpenAI and its plans? But for the rest of us, our trust doesn't matter. It would be like me saying, "I don't see why we should trust Saudi Aramco."
> It would be like me saying, "I don't see why we should trust Saudi Aramco."
It's a completely fair response to say that if the CEO of Saudi Aramco performatively pens an article on how to mitigate the effects of global warming, while also profiting from it and taking no tangible action to fix the problem.
My question, rephrased, is "so what"? What is my or our trust worth? What does us claiming we no longer trust Saudi Aramco achieve unless we are investors or perhaps other significant stakeholders?
I totally understand your disenchantment, but if you feel that the mere opinions of the plebs are inconsequential^ and hence pointless, why participate in a public forum at all?
^Demonstrably not true if you look at the history of popular movements that won real and durable change, all of which gathered momentum from the disgruntled mumblings of the plebs.
> but if you feel that the mere opinions of the plebs are inconsequential^ and hence pointless, why participate in a public forum at all?
That gets to the heart of the matter, actually. Personally I participate in order to get new information and learn new ideas. But yeah, being human and flawed, I do end up giving opinions and I notice most people just want to talk about opinions.
But I digress. My question was specifically about the value of saying "I don't trust OpenAI".
Perhaps it was a warning to the naive who might take the article at face value, a prompt to reconsider? What is obvious to you might not be to another. I'd say a sizable number of this forum's readers are either on the fence or view OpenAI favorably.
And back to the second part of what I said: there are network effects to grumbling. I'd also add that there's a chilling effect to apathy.
Not really. That's why there's a saying, "Opinions are like assholes, everybody has one."
Information has to correspond to reality, the only arbiter of truth. When I give you my opinion I can say pretty much anything; opinions usually correspond to feelings, e.g. "I think if you ask her out she'll say yes. I think this because you're my friend and I like you, so surely she will."
But I think you already know that opinions are not information, and perhaps you're asking rhetorically, and hopefully not trolling me?
They are, at minimum, information about a particular person's values and perspective. Collectively, those individual opinions are what shape elections and foment uprisings.
Nonprofits make a social contract, purporting to operate for the public good, not profit.
Trust that their operations are indeed benefiting the public and that they are acting truthfully is important to making that social contract work.
Shady companies doing shady things and keeping shady records don't incentivize any kind of market participant -- investors, consumers, or philanthropists.
> Nonprofits make a social contract, purporting to operate for the public good, not profit.
This is obvious (though I disagree that there is a social contract, and if there is, it's worth the paper it's printed on), and everybody is aware of what a nonprofit is. But your reply still doesn't answer my question. Another way of asking it is: how many other nonprofits have you audited for trustworthiness before this conversation? What was the impact of your audit?
Or is saying "we can no longer trust Sam Altman" just us twiddling our thumbs so we can signal our virtue to others or comfort ourselves in our own powerlessness? In less than a decade he'll have an army of humanoid sentient robots and probably be the wealthiest person on the planet, and we'll still be yelling "we can no longer trust him"?
> You strike me as the type to be unaware of social norms
You have a weird way of talking to strangers. But you know what they say about assumptions.
> I have no idea what you're on about regarding sentient robots.
So you're ignorant about both the state and purpose of OpenAI's research and the state of the art in robotics. So why am I even talking to an ignorant person? smh.
What initial question? It seems like you are confusing threads (again).
Is that a pun about cannabis?
I just don't think Sam Altman is gonna be the guy to command a droid army, and I don't think the droids will look humanoid. I also think that us calling him a dipshit in public helps undermine his efforts to waste vital resources pursuing a dystopia he may or may not want and almost certainly won't meaningfully achieve.
Maybe we're just operating on different assumptions. And maybe we have different goals. Perhaps I'm just replying to weigh down the conversation and thread, to dilute Altman's profiteering propaganda.
I don't see why we should trust OpenAI's promises now, when they've broken promises in the past.
See the "OpenAI has a history of broken promises" section of this webpage: https://www.safetyabandoned.org/
In my view, state AGs should not allow them to complete their transition to a for-profit.