
It's hard to believe a Board that can't control itself or its employees could responsibly manage AI. Or that anyone could manage AGI.

There is a long history of governance problems in nonprofits (see the transaction-cost economics literature on this point). Their ambiguous goals induce politics. One benefit of profit-driven boards is that their goals involve only well-understood risk trade-offs between growth now and growth later, and the board members are selected for their actual stake in that goal.

This is the problem with religious organizations and ideological governments: they can't be trusted, because they will be captured by their internal politics.

I think it would be much more rational to make AI/AGI an entirely for-profit enterprise, BUT reverse the liability defaults and require that they pay all external costs resulting from their products.

Transaction-cost economics shows that, in theory, it doesn't matter where liability is allocated so long as the transaction cost of redistributing it is near zero (i.e., contracting in advance and tort claims after the fact are both cheap), because the parties will simply work it out. Government or laws are required only to make up for the actual, non-zero cost of disputes by establishing settled expectations.

The internet, and software generally, has been a domain where consumers have had NO redress whatsoever for exported costs. It has grown (and disrupted) fantastically as a result.

So to control AI/AGI, make it for-profit, but flip liability to require all exported costs to be paid by the developer. That would ensure applications are incredibly narrow AND have net-positive social impact.




I appreciate this argument, but I also think naked profit-seeking is the cause of a lot of problems in our economy, and there are qualities that are hard to quantify when you structure an organization around it. Blindly following the economic argument can also cause problems; it's a big reason why American corporate culture moved away from building a good product first and toward maximizing shareholder value. The OpenAI board certainly seems capricious and impulsive given this decision, though.


On board with this. Arguing that a for-profit is somehow the moral position over a non-profit because money is tangible while the idea of doing good is not well-defined... feels like something a Rockefeller-owned newspaper from the Industrial Revolution would have printed.


Yeah, that's right. There's a blogger in another post on HN who makes the same point at the very end: https://loeber.substack.com/p/a-timeline-of-the-openai-board


Super interesting link there. You should submit it, if no one has yet.

"Governance can be messy. Time will be the judge of whether this act of governance was wise or not." (Narrator: specifically, about 12 hours.) "But you should note that the people involved in this act of corporate governance are roughly the same people trying to position themselves to govern policy on artificial intelligence.

"It seems much easier to govern a single-digit number of highly capable people than to “govern” artificial superintelligence. If it turns out that this act of governance was unwise, then it calls into serious question the ability of these people and their organizations (Georgetown’s CSET, Open Philanthropy, etc.) to conduct governance in general, especially of the most impactful technology of the hundred years to come. Many people are saying we need more governance: maybe it turns out we need less."


From that link:

>I could not find anything in the way of a source on when, or under what circumstances, Tasha McCauley joined the Board.

I would add, "or why she's on the board or why anyone thought she was qualified to be on the board".

At least with Helen Toner the intent was likely just to add a token AI Safety academic to pacify "concerned" Congressmen.

I am kind of curious how Adam D'Angelo voted. If he voted against removing Sam that would make this even more of a farce.


D’Angelo had to have voted in favor, because otherwise they don’t get a four-vote majority.


You only need 4 votes for a majority if Sam and Greg were present for the vote, and neither was. Ilya + the 2 stooges voting in favor and D'Angelo voting against would be a 3-1 majority.


I am not an expert, but I don't think that is how it works. My guess is that the only reason they could vote without Sam and Greg present is that they had a majority even if both were there. That means they had 4 votes, which means all the other board members voted against Sam and Greg.

It does not seem reasonable that only some members of a board could get together and vote on things without the others present. That would be chaos.


Is their corporate charter public? I couldn't find it on their website.


> Their ambiguous goals induce politics. [...] This is the problem with religious organizations and ideological governments: they can't be trusted, because they will be captured by their internal politics.

Yes, of course. But that's because "doing good" is by definition much more ambiguous than "making money". It's far higher-dimensional, and it has countless definitions.

So nonprofits will by definition involve more politics at the human level. I'd say we must accept that if we want to live amongst the actions of nonprofits rather than just for-profits.

To claim that "politics" is a reason something "can't be trusted" is akin to saying that the involvement of human affairs means something can't be trusted (relative to computers). We must imagine effective politics, or else we cannot imagine effective human affairs -- only the mechanistic affairs of simple optimization systems (like capitalist markets).


Yeah, there are no governance problems in for-profit companies that have led to, for example, the smoking epidemic, the opioid epidemic, or the impending collapse of the planet's biosphere, all for the sake of a dime.


The solution is to replace the board members with AGI entities, isn't it? Just have to figure out how to do the real-time incorporation of current data into the model. I bet that's an active thing at OpenAI. Seems to have been a hot discussion topic lately:

https://www.workbyjacob.com/thoughts/from-llm-to-rqm-real-ti...

The real risk is that some government will put the result in charge of their national defense system, aka Skynet, not that kids will ask it how to make illegal drugs. The curious silence on military-industrial applications of LLMs makes me suspect this is part of the OpenAI story... Good plot for a novel, at least.


> The real risk is that some government will put the result in charge of their national defense system, aka Skynet, not that kids will ask it how to make illegal drugs.

These can't possibly be the most realistic failure cases you can imagine, can they? Who cares if "kids" "make illegal drugs"? But yeah, if kids can make illegal drugs with this tech, then actual bad actors can make actual dangerous substances with this tech.

The real risk is manifold and totally unforeseeable, in the same way that a 400 Elo chess player has zero conception of "the risks" a 2000 Elo player will exploit to beat them.


Every bad actor who wants to make dangerous substances can find that information in the scientific literature with little difficulty. An LLM, however, is probably not going to tell you that the most likely outcome for a wannabe chemist trying to cook something up from an LLM recipe is that they'll poison themselves.

This generally fits a notion I've heard expressed repeatedly: today's LLMs are most useful to people who already have some domain expertise; they just make things faster and easier. Tomorrow's LLMs, that's another question, as you imply.



