We need a synonym for “fascist” because some people agree that what they do and how they do it is bad, but they are incapable of looking past the word.
Boomers used to tell us to never trust anything online and now they send their life savings to "Brad Pitt"
New generations get unlimited brain rot delivered through infinite scroll, don't know what a folder is, think everything is "an app", and keep falling for "technology will free us from work and cure cancer".
There was a sweet spot during which you could grow alongside the internet at a pace that was still manageable, when companies and scammers weren't trying so hard to rob you of your time, money, and attention.
> By making these fake images ubiquitous we are forcing people to quickly learn that they can't believe what they see on the internet and tracking down sources and deciding who you trust is critically important.
Has this thought process ever worked in real life? I know plenty of seniors who still believe everything that comes out of Facebook, be it AI or not, and before that it was TV, radio, newspapers, etc.
Most people choose to believe, which is why they have a hard time confronting facts.
And not just seniors. I see people of all ages who are perfectly happy to accept artificially generated images and video so long as it plays to their existing biases. My impression is that the majority of humanity is not very skeptical by default, and unwilling to learn.
Yes. People willingly accept made up text (stories) if it fits their world view, and for words we always knew that they could be untrue. Why should it be different for images/audio/video?
Gambia's economy is 70% services. Do you want the USA to live like Gambia? By your logic, money grows on trees. Services inside a country don't produce anything for export. US doctors are able to buy Mercedes cars from Germany because this country still exports Windows and iPhones.
> Having AI in the mix could potentially fix the problem (partially).
Any examples?
As far as I understand, claims in the current AI cycle are wildly exaggerated, and sometimes companies rely on sort of circular deals to make revenue appear higher than it actually is, e.g. OpenAI and Microsoft or Nvidia. Wouldn't that mean that AI companies are primed to oversell and underdeliver, effectively making the problem even worse?
On your first question, it is impossible to unlink it from Twitter: Musk being feverishly active there, and then buying the platform, was the catalyst for a new wave of right-wing support for him and his industries.
If you take the claims at face value, then the process was 100% fair and xAI provides the best models and guardrails for processing top secret data at a lower cost, compared to the competition. Personally, I find this unlikely.
We also know that Musk has been cozy with the current administration, and spearheaded the very same "efficiency" campaign on show here.
I think it would be naive to blindly believe Musk and the DOD claims and ignore their common history.
How can it have lost 80% of its valuation when Elon Musk bought it for $44 billion and then sold it to an entirely different entity at a valuation of $45 billion, and then that entity was bought by a completely different third entity for a valuation of $250 billion?
How can it have lost 80% of its valuation when Elon Musk bought it for $44 billion and then sold it to Elon Musk at a valuation of $45 billion, and then that entity was bought by Elon Musk for a valuation of $250 billion?
I think the nuanced take on Joel's rant is this: it was good advice for 26 years. It became slightly less good advice a few months ago. This is a good time to warn overenthusiastic people that it's still good advice in 2026, and to start a discussion about which of its assumptions remain true in 2027 and later.
We're not writing code in a computer language any more, we're writing specs in structured English of sufficient clarity that they can be generated from.
> writing specs in structured English of sufficient clarity
What does "sufficient clarity" mean? Is English expressive enough and free of ambiguities for that? And who is going to review this process, another LLM, with the same biases and shortcomings?
I code for a living, and so far I'm OK with using LLMs to aid in my day-to-day job. But I wouldn't trust any LLM to produce code of sufficient quality that I would be comfortable deploying it in production without human review and supervision. And I most definitely wouldn't task an LLM to just go and rewrite large parts of a product because of a change of specs.