Lots of references to the term "progress" but no mention that "AI" seems to freeze it at a moment in time. It is only backward-looking. (People make jokes about asking ChatGPTx questions about events that happened after 20xx.) Predicting the next token is based on patterns that have come before, found in the material upon which a model has been trained. There is an ability to create new (re)combinations of past material, and this is quite useful, but there is no ability to create truly new material, something that has never been seen before. Creating new material is the essence of "progress", IMO. It is an amazing thing that people do, not computers. Originality. Of course, people might use computers to help them do it.
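The "patterns that have come before" point can be illustrated with a toy bigram model — a deliberately simplified sketch, nothing like a real transformer, but it makes the backward-looking property concrete: the model can only ever emit a token it has already seen following the current one, so anything outside its training text is simply unreachable.

```python
from collections import defaultdict
import random

def train_bigram(text):
    """Count, for each token, which tokens follow it in the training text."""
    follows = defaultdict(list)
    tokens = text.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        follows[prev].append(nxt)
    return follows

def next_token(follows, prev):
    """Sample the next token from continuations observed in training.
    A continuation never seen in training can never be produced."""
    candidates = follows.get(prev)
    if not candidates:
        return None  # the model is frozen at its training data
    return random.choice(candidates)

model = train_bigram("the cat sat on the mat")
print(next_token(model, "the"))   # "cat" or "mat" -- only seen continuations
print(next_token(model, "2024"))  # None -- unseen token, no prediction at all
```

Real LLMs condition on much longer contexts and generalize far better than this, which is what makes their (re)combinations useful — but the same in-principle limit applies: the output distribution is derived entirely from the training material.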
From a recent New Yorker article that reached the HN front page:
"This has been mocked as simply an advanced form of autofill. But, when I write a sentence, I, too, am trying to guess the best next word. It just doesn't feel especially "auto." One big difference is that, since I fancy myself a writer, I am trying to avoid, wherever possible, the statistically most common solution."
https://www.newyorker.com/magazine/2024/01/22/who-owns-this-...
> Even worse is what the lawsuit calls "regurgitation," which is when OpenAI spits out text that matches Times articles verbatim. The Times provides 100 examples of such "regurgitation" in the lawsuit. In its rebuttal, OpenAI said that regurgitation is a "rare bug" that the company is "working to drive to zero." It also claims that the Times "intentionally manipulated prompts" to get this to happen and "cherry-picked their examples from many attempts."
So long as an LLM is trained on unlicensed content, this risk can never be driven to zero, because there is no way to prove an LLM will never regurgitate its training data. OpenAI's rebuttal is marketing that counts on persuading the audience that its engineers can write bug-free software.
It's not just Microsoft and OpenAI that are vulnerable to lawsuits when an LLM generates content that may be assessed as a "derivative work" in a court of law — it's any OpenAI user who republishes the derivative work.
I think it would be just to kill OpenAI if it is deemed to have violated copyright. Ownership is the cornerstone of free enterprise, and if OpenAI has stolen some valuable intellectual property, well, it's guilty of theft.
Greed is way more powerful. Political power is downstream of money power. Wall Street is not going to let its shiny new toy go away.
We will happily walk this path until we all live in a giant AI-enabled global feudal regime where even the probability of a revolution will be nil, because the AI will know who the potential revolutionary leaders are before they even think about a revolution.