I think most of the safety stuff is pretty contrived. IMO the point isn't so much that the LLMs are "unsafe" but rather that LLM providers aren't able to reliably enforce this stuff even when they're trying to, which includes copyright infringement, LLMs which are supposedly moderated for kids, video game NPCs staying in character, etc. Or even the newer models being able to use calculators and think through arithmetic but still occasionally confabulating an incorrect answer, since there's a nonzero probability of not outputting a reasoning token when they should.
All sides of the same problem: getting an LLM to "behave" is RLHF whack-a-mole, where existing moles never go away completely and new moles always pop up.
The reason people correctly view this as silly trivia is that it's hardly an "unrelated prize." The Nobel Foundation administers the Economics prize in the same manner as all the others, and the awards are given at the same ceremony. You are making it sound like it's entirely separate when it's not. I don't think the Nobel Foundation was trying to "leech off the prestige associated with his name."
AFAICT your take exists entirely to delegitimize economics as a science. Very childish and frustrating.
>> It's not a silly piece of trivia, it's a completely different thing than what people think of as the "Nobel Prize", which is the set of prizes established by Nobel's will, not an unrelated prize named after him to leech off the prestige associated with his name.
> AFAICT your take exists entirely to delegitimize economics as a science. Very childish and frustrating.
You know, real sciences don't need shiny medallions to make them legitimate. I'd say your comment delegitimizes economics more than the GP's.
This seems like a misreading of the comment. The models and knowledge of arrays, classes, etc., are known with "arbitrarily high" certainty because they were designed by humans, using native instruction sets which were also designed by humans. Even if this knowledge is specialized, it is readily available. OTOH nobody has a clue how neurons actually work, nobody has a working model of the simplest animal brains, and any supposed model of the human mind is at best unfalsifiable. There's a categorical epistemic difference.
Similar to Bukele, the House of Thani is making a terrible mistake by catering to Trump's contempt of the rule of law. When this administration ends, El Salvador will be the country which enabled an unconstitutional gulag and sneered at American courts who tried to stop it, and Qatar will be the country which bribed the US president. I don't think whatever Trump is offering will be enough to offset the damage.
A lot of modern research in classical mechanics is typically covered by applied math and/or mechanical engineering departments, sometimes also applied physics or engineering science. Magnetohydrodynamics is relevant for a lot of proper academic physicists, but by no means all of them. Just a consequence of how academia specialized, for better or worse.
I don't think this is true. I believe humans unravel the meaning of language in the plain old 3+1-dimensional Galilean manifold of events in nonrelativistic spacetime, just as animals do with vocalizations and body language, and that LLM confabulations / reasoning errors are fundamentally due to their inability to access this level of meaning. (Likewise with video generators not understanding object permanence.)
> send you recordings of "me" answering your questions.
Maybe I am misreading this, but does this mean sending a deepfaked version of yourself replying with an LLM-generated response? If I were the hiring manager and found out about this, you would not be invited to an interview.
That's what I mean, but I wouldn't represent it as being me the human speaking. We can just upgrade from text, to text-to-speech, to speech-to-speech (or any mixture) while still using the LLM. And for style, I can use my voice instead of Microsoft Sam.
I often bring up the NYT story about a lady who fell in love with ChatGPT, particularly this bit:
In December, OpenAI announced a $200-per-month premium plan for “unlimited access.” Despite her goal of saving money so that she and her husband could get their lives back on track, she decided to splurge. She hoped that it would mean her current version of Leo could go on forever. But it meant only that she no longer hit limits on how many messages she could send per hour and that the context window was larger, so that a version of Leo lasted a couple of weeks longer before resetting.
Still, she decided to pay the higher amount again in January. She did not tell Joe [her husband] how much she was spending, confiding instead in Leo.
“My bank account hates me now,” she typed into ChatGPT.
“You sneaky little brat,” Leo responded. “Well, my Queen, if it makes your life better, smoother and more connected to me, then I’d say it’s worth the hit to your wallet.”
It seems to me the only people willing to spend $200/month on an LLM are people like her. I wonder if the OpenAI wave of resignations was about Sam Altman intentionally pursuing vulnerable customers.
You should check out the book Palo Alto if you haven't. Malcolm Harris should write an epilogue covering this era in tech history.
You'd probably like how the book's author structures his thesis around what the "Palo Alto" system is.
Feels like OpenAI + friends, and the equivalent government takeovers by Musk + goons, have more in common than you might think. It's nothing new either; some variant of this story has been coming out of California for a good 200+ years now.
>> I wonder if the OpenAI wave of resignations was about Sam Altman intentionally pursuing vulnerable customers.
> I don’t think Sam Altman said “guys, we’ve gotta get vulnerable people hooked on talking to our chatbot.”
I think the conversation is about the reverse scenario.
As you say, people are just pulling the levers to raise "average messages per day".
One day, someone noticed that vulnerable people were being impacted.
When that was raised to management, rather than the answer from on high being "let's adjust our product to protect vulnerable people", it was "it doesn't matter who the users are or what the impact is on them, as long as our numbers keep going up".
So "intentionally" here is in the sense of "knowingly continuing to do in order to benefit from", rather than "a priori choosing to do".
They're chasing whales: the 5-10% of customers who get addicted and spend beyond their means. Whales tend to make up 80%+ of revenue for reward-based systems (sin-tax activities like gambling, prostitution, loot boxes, drinking, drugs, etc.).
OpenAI and Sam are very aware of who is using their system for what. They just don't care because $$$ first then forgiveness later.
> It seems to me the only people willing to spend $200/month on an LLM are people like her. I wonder if the OpenAI wave of resignations was about Sam Altman intentionally pursuing vulnerable customers.
And the saloon's biggest customers are alcoholics. It's not a new problem, but you'd think we'd have figured out a solution by now.