Expecting internet celebrities to have "authentic" interactions with you is just a parasocial relationship. It always has been, and gen-AI just reveals it.
Yeah, and it doesn't even change it - it just makes it more appealing for the news to run with a story. Of course, the focus is on the AI/authenticity angle; I'd have thought they'd give some space to the plight of brand agency marketers (and the cheap labor they subcontract to) being pushed out of their jobs by LLMs, but I guess that would require first explaining to people that it was those marketers they were having their "relationships" with all along. But that's too sad and complex to report on; it's more of an op-ed thing anyway.
A fisherman was relaxing on a sunny beach, enjoying the day with his fishing line in the water. A businessman, stressed from work, walked by and criticized him for not working harder.
"If you worked more, you'd catch more fish," the businessman said.
"And what would my reward be?" asked the fisherman with a smile.
"You could earn money, buy bigger nets, and catch even more fish!" the businessman replied.
"And then what?" the fisherman asked again.
"Then, you could buy a boat and catch even larger hauls!" said the businessman.
"And after that?"
"You could buy more boats, hire a crew, and eventually own a fleet, freeing you to relax forever!"
The fisherman, still smiling, replied, "But isn't that what I'm already doing?"
I think that might have been their point. People moving work to AI so they can "relax" by working on more complicated technical matters (or more AI) are the businessmen, and the meteorologists just chilling out predicting the weather as best as they can with science are the fishermen.
Edit: Just saw their reply to you, so maybe I was wrong about the parable coming across wrong.
I mean, yeah, if you want 90% of your relaxation time for the rest of your life to be while you're fishing, that's fine.
For the businessman to assume otherwise is not outlandish. The idea of an entire fleet is overblown, but having "fuck you money" is a pretty nice goal.
-
Also I don't see how this applies to meteorologists? Which part is the "working more" aspect?
OpenAI has completely pivoted to a Product company (vs a Model / API company)
The minute they switch to competing on the API is the minute their user base realizes that OpenAI has neither the best model, nor the most cost-effective one, nor the fastest one.
So their business strategy is subscriptions: bundle a bunch of products and market them to the masses.
They have brand recognition with "ChatGPT" and they'll milk it.
Sam is more of a Steve Jobs than a Bill Gates personality, or at least a wannabe.
You may think Anthropic is different? It's the same UI + API. The difference is they don't create free accounts for personal emails - sort of a shadow ban. They did accept my work email for a free account and my personal email for paid API access, though. Maybe that's just me; I tried several personal emails.
As for model's quality they are both impressive. I haven't run meaningful comparison tests.
I don't know what you're implying by juxtaposing Jobs vs Gates. From what I know, Apple successfully marketed their hardware as a lifestyle. I don't think OpenAI is anything like that.
In terms of future-proofing it's not even comparable. SWEs have negative protection: mostly not unionized, and you don't even need a license to write code.
If the government says we're going to launch 100 programming schools and flood the market with 100,000 coders in the next 4 years, no one bats an eye. If it's for doctors, you can see what happened in Korea[1].
The problem isn't whether we should regulate AI. It's whether it's even possible to regulate them without causing significant turmoil and damage to the society.
It's not hyperbole. Hunyuan was released before Sora, so regulating Sora does absolutely nothing unless you can also regulate Hunyuan, which is 1) open source and 2) made by a Chinese company.
How do we expect the US govt to regulate that? Threaten to sanction China unless they stop doing AI research?
Easy-peasy. Just require all software to be cryptographically signed, with a trusted chain that leads to a government-vetted author, and make that author responsible for the wrongdoings of that software's users.
We're most of the way there with "our" locked-down, walled-garden pocket supercomputers. Just extend that breadth and bring it to the rest of computing using the force of law.
---
Can I hear someone saying something like "That will never work!"?
Perhaps we should meditate upon that before we leap into any new age of regulation.
That's exactly the kind of logical conclusion I had hoped for someone here to reach in this bizarre sea of emotional pleas.
After over two decades of careful preparation, we're the stroke of a legislative pen away from having all of the software on our computers regulated by our friends in the government.
It's not even a slippery slope argument. In order to be effective, "We must regulate AI!" means the same thing as "We must regulate computer software!"
The two are so nearly identical that they're not even as different as two sides of the same coin.
(Be careful what you wish for; you might just get it.)
- OpenAI added a filter for his name
You: so we don't know why the filter was added
I know this is HN, but come on.