When Altman says "OpenAI is not for sale", is there any sense in which that is not a lie?
As part of the for-profit transition he is selling all of the non-profit's assets. Musk's group has offered to buy them, and Altman has declined, but he is still planning on selling them, no?
Sure they can. They're a non-profit; they don't have a fiduciary duty to shareholders. Declining may have tax and accounting ramifications, but they absolutely can do it, since they have no shareholders.
OAI's fiduciary duty is to their charitable mission. If selling to Elon for $97b jeopardizes the mission, then so would selling to preferred investors for $40b (as they would in turn face immense pressure by shareholders to realize an instant $57b gain by reselling to Musk!)
That's what this whole thing is about. Musk's offer will probably not be accepted, and he knows it. The purpose is to throw a wrench in OAI's plan to sell to insiders at a heavy discount, possibly making it impossible for the nonprofit to become a for-profit.
If this offer forces the preferred investors to cough up an extra ~$57b for the nonprofit, that's fantastic for the nonprofit mission.
Regardless of how you feel about the potential consequences of Musk being a significant shareholder of OpenAI, the alternative is not to let sama and other preferred investors buy the non-profit's stake at a >50% discount as they had planned to do before this offer.
Except that if you're a high school student who is using a calculator for stuff most if not all of the readers on this site would do in their heads, having a calculator that ignores precedence just makes things that much harder.
A high school student should already know the difference between a dumb calculator that can only show a single number and a smart calculator that lets you write a full line, including (), before pressing =, having used both by middle school.
The MUMPS database is wild. When I was working in MUMPS, it was so easy and fun to whip up an internal tool to share with my coworkers. You don't have to give any special thought at all to persistence, so you're able to stay in the flow of thinking about your business logic.
But as you said, the language itself is almost unbearable to use.
I'd love to use HTMX at work. Sadly the security folks would probably balk at checking in JS code that uses eval(), even though you can disable eval at runtime in the HTMX config.
I thought about writing a script to remove all references to eval (and all tests relying on eval), but at that point it would probably be easier to just rewrite the library.
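For reference, the runtime switch mentioned above is a one-liner. A minimal sketch, assuming htmx is already loaded as a global via a script tag (`allowEval` is htmx's documented config flag; the rest is illustrative):

```javascript
// Disable the htmx features that rely on eval()/new Function()
// (event filters, some hx-on handling, etc.) at runtime,
// without patching the library source itself.
htmx.config.allowEval = false;
```

The same setting can also be applied declaratively with a meta tag in the page head: `<meta name="htmx-config" content='{"allowEval":false}'>`. Neither approach removes the `eval` references from the shipped source, though, which is exactly what a check-in scanner would flag.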
This is, hilariously, a result of China's own position. China's claim is that Taiwan is part of a China still in civil war and should be re-integrated. Part of that strategy is an insistence that OTHER countries parrot this claim, including countries like the US that recognize Taiwan as an independent entity. The "oneness" of China is vitally important.
The claimed story is that Taiwan worries that if they abandon the "one warring China" policy and openly state they are independent, that will aggravate China, cause them to push their claim harder, and maybe lead to war.
Taiwan has such a policy directly because that's the policy China wants everyone else to adopt. Notably, Taiwan has made zero effort to build a military capable of any over-water invasion, which would be absolutely necessary if they actually wanted to press the claim. Unless you think Taiwan would rely on the USA to invade China for it, which I do not think the US ever wants to do. Our explicit strategy is to own all the islands around China (including Taiwan) and blockade China in all but name.
China meanwhile DOES build a military to take over Taiwan, explicitly, including systems designed to sink our carriers and practice targets in the desert. Strictly speaking I'm not concerned about China wanting a viable means to sink our Navy, as China doesn't want to starve to death if we could blockade them, but the buildup around the capability to take Taiwan betrays that it is not a defensive posture.
No it really isn't, and it's certainly not hilarious. Taiwan's position is historic. The government of Taiwan literally used to be the government of the mainland.
2017, seven Google employees invent the transformer architecture and publish a paper. Google is investing heavily in ML, with its own custom 'TPU' chips and its 'TensorFlow' ML framework.
2019ish, Google has an internal chatbot they decide to do absolutely nothing with. Some idiot tells the press it's sentient, and they fire him.
2022, ChatGPT launches. It proves really powerful, a product loads of individuals and businesses are ready to pay for, and the value of the company skyrockets.
2023, none of the seven Transformer paper authors are at Google any more. Google rushes out Bard. Turns out they don't have a sentient super-intelligence after all. In fact it's badly received enough they end up needing to rebrand it a few months later.
Classic tortoise-and-hare situation - Google spent 5 years napping, then had to sprint flat out just to take third place.
Have you ever listened to what Lemoine said? Sure, we have no proof, and he's under NDA, so there's probably no documentation that can be scrutinized. But still, his alleged chats were chilling in some ways. They probably didn't expect him to go public, and so they spent years nerfing their chatbot before launching it as a product, and that's why it sucks: they're too careful and have too much to lose in bonuses. Google will probably lose some market share over the next few years before they get nervous enough to put someone with a longer leash into the CEO seat.
I recall this particular person seeming like a bit of a crackpot on internal forums before (and for reasons unrelated to) the LaMDA chatbot. I didn't know him personally and don't even remember the details anymore, but it made an impression that wasn't dispelled by his reaction to a new model passing the Turing test.
For the tortoise to win in technology it needs to be dedicated to relentlessly polishing and improving something over a long period to make the best product experience. Those aren't traits I particularly associate with Google unfortunately.
It is easier to judge revenue or market share objectively than the technical quality of the models themselves; functionally, they are relatively close to each other.
In the market, I would say both Anthropic and OpenAI have done that much better than traditional big tech, including Google.
OpenAI is the market leader by far, with the most name recognition. Google was the last to market. Its initial release of Gemini was a total flop because of the "use Elmer's glue on pizza to keep the cheese on" meme. It has finally become more consistent and manages to compete with other models, though I never see anyone recommending Gemini first.
All of these companies are in the red, but OpenAI has the most revenue.
This is a bit of a tangent, but I don't think OpenAI's brand is all that durable. You can see that Perplexity.AI has been gaining rapidly. At this point they have half as much search traffic as OpenAI: