I think it's an incredibly important question to be able to explain how an AI creating novel work is different from a human creating novel work. Why does this grind your gears?



To me it seems to imply a stunningly nihilistic point of view vis-a-vis human writing (or art, where it also gets repeated a lot here).

It seems almost definitionally obvious that what an LLM does is not the same as what a human does – both on the basis that if all human writing were merely done via blending together other writing we had seen in the past, it would appear to be impossible for us to have developed written communication in the first place, and on the basis that when I write something, I mean something I am then attempting to communicate. An LLM never means to communicate anything, there is no there there; it simply reproduces the most likely tokens in response to a prompt.

To insist that we're just a bunch of walking, breathing prompt-reproducers essentially seems like it's rooted in a belief that we have no interior lives, and that meaning in writing or art is utterly illusory.


see: http://www.jaronlanier.com/zombie.html

It’s not said very much, but this style of dehumanization is really corrosive in a way that directly benefits the worst forms of human governments and structures, and this fact goes, I think, genuinely unrecognized too often in tech-land.

if we really are p-zombies, then those people aren’t really suffering, right, so it’s fine …


> To insist that we're just a bunch of walking, breathing prompt-reproducers essentially seems like it's rooted in a belief that we have no interior lives, and that meaning in writing or art is utterly illusory

Let’s assume humans are not just evolved pattern machines for a second. A human can still do a completely non-profound work of art following a prompt to draw X in the style of Y. And that’s ok. So why can a machine not do the same?

Surely not everything a human does is intrinsically profound.


This is not just moving but fully inverting the goal posts. Nobody at any point was disputing that a machine can ape non-profound or rote or meaningless human output.

The original discussion was precisely an objection to the attitude underlying "How is *GPT taking in data and producing an output different than a human learning a skill and making prose/code/art?" and the answer is right in your premise, flipped around: not everything a human does is non-profound, either. A human can intend to mean something with prose or art, even if not all prose or art means something — but any meaning we see in ChatGPT’s output is essentially pareidolia.


I disagree. I don’t care much about what is profound. I think most of it is not. Things that we call profound are really just astute observations of patterns in the real world, and there’s nothing wrong with that.

However, profundity doesn’t need to factor into the debate over whether AI should or should not be allowed to train on things. If we allow humans to copy things, then humans ought to be allowed to copy things with dumb, non-sentient AI too.

AI in its current state is just a tool, much like a paintbrush.

Cue the inevitable appeal to copying exact works, rebuttals about training on human-painted mimicries, and then bam, you’ve got the author’s special style learned by the model with extra steps.

It’s annoying and pointless.

Art that is merely visually intriguing is not very interesting. If an artist makes something without a particular idea to communicate, it’s just aesthetics. It is not profound. If an artist has an idea and creates a work that represents it, then maybe it is profound. But it doesn’t matter if it was made with paint or a computer. The idea is the profound thing. AI is not sentient; the artist is still the user.

The appeals to pareidolia are wrong. Synthesis of ideas from past data is natural. But the AI does not choose things. What you’re really complaining about is creation of art from apparent randomness. Not the AI model alone but monkeys on a typewriter getting something compelling from the AI.

What do we do when the tools are so powerful that a monkey creates a profound work that the monkey doesn’t understand? Shrug.


So your first 6 paragraphs have nothing to do with anything I wrote – you're just arguing with some other post you've made up in your head.

> The appeals to pareidolia are wrong. Synthesis of ideas from past data is natural. But the AI does not choose things. What you’re really complaining about is creation of art from apparent randomness. Not the AI model alone but monkeys on a typewriter getting something compelling from the AI.

No, you've failed to understand what I'm saying entirely (because, again, you've responded to some other post that only exists in your mind).

What I'm talking about is intention and its relationship to meaning, in the philosophical sense (and not... copyright or whatever it is you're rambling on about).

Witness: when ChatGPT famously mis-asserts the number of characters in a word (say, that there are twelve characters in the word "thirteen"), it's not that it's trying and failing to count, because it's confused by letter forms or its attention wanders like a 3-year-old's or its internal representation of countable sets glitches around the number 8 or something – it never counted anything at all; it's simply the case that twelve is the most statistically likely set of tokens corresponding to that input prompt per its training set. And when it produces a factually correct result (say, "there are 81 words in the first sentence of the Declaration of Independence"), it produces it for exactly the same reason – not because it has counted the words and formed an internal representation and intends to mean its internal understanding, but simply because 81 is the most statistically likely set of tokens corresponding to that prompt per its training set.
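
To make "most statistically likely set of tokens" concrete, here is a minimal sketch; the vocabulary and probabilities are made up for illustration and look nothing like real model internals:

    # Toy next-token predictor. The probabilities are invented for
    # illustration; a real LLM scores tokens with a neural network.
    probs = {
        "twelve": 0.41,   # highest-scoring continuation in this toy "training set"
        "eight": 0.23,
        "thirteen": 0.19,
        "ten": 0.17,
    }

    def next_token(prompt: str) -> str:
        # No counting happens anywhere: the prompt only selects which
        # learned statistics apply, and the answer is whatever scored highest.
        return max(probs, key=probs.get)

    print(next_token('How many characters are in the word "thirteen"?'))
    # -> twelve, produced for the same reason whether it is right or wrong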

And yet when it produces these correct results, people ooh and aah over how "smart" it is, how much it has "understood", how "good it is at counting; better than my son!", and when it produces incorrect results people deride it as dumb and so forth, and all of this, all of this, is pareidolia; it is neither smart in the one case nor dumb in the other, it does not learn in the sense the word is normally used, it does no counting. We're anthropomorphizing an algorithm that is doing nothing like what we imagine it to do, because we mistake the statistical order in its expressions for the presence of a meaning intended by those expressions. It's all projection on our end.


Your opinion is not the only one that I’m addressing. I clearly understand your point, which I address by:

> What you’re really complaining about is creation of art from apparent randomness. Not the AI model alone but monkeys on a typewriter getting something compelling from the AI.

You accuse others of anthropomorphizing the tool, but you do the same. Art created with ChatGPT is not created by ChatGPT. It is created by a human using ChatGPT. There is no intrinsic limitation on the profundity of art created using ChatGPT or other algorithms.

It’s like complaining that paint is stupid. A comment that is largely irrelevant to the artistic merit of paintings.


> Art created with ChatGPT is not created by ChatGPT. It is created by a human using ChatGPT.

Sure, in approximately the same way that the CEO of Sunrise is an animator. Pull the other one, it's got bells on.

Yours is an utterly incoherent interpretation; when ChatGPT outputs that there are 12 characters in the word "thirteen", I have not "created the meaning" 12. You're just fixated on this "actually I am le real artist for typing prompts" axe you want to grind, but it has fuck all to do with anything I'm saying.


You are cherry-picking a dumb example. We don’t shit on paint when someone makes poop with it. What you should be cherry-picking is examples of art that people would consider profound upon seeing it. Otherwise you’ll simply look like a dumbass when you imply that only trash will be generated and then beautiful stuff is generated anyway. The fact that current AI has dumb interpretations of things is hardly a fundamental quality of generative algos.

My statement is simply that the algos are a tool. And tools can be used to make good art.


I suspect it's the same reason it grinds my gears that it's called a "learning rate" instead of "step size" in ML.

Not only is it a less precise term, but it carries the wrong implications.

Personally, I'm on the side of releasing training data. Let everybody train on everything. But it's always felt absurd to say that the ML models are "learning" things.
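
For anyone who hasn't seen the term in context, the "learning rate" is just the scalar multiplier on the gradient step. A minimal sketch with a toy quadratic loss (the loss and numbers are made up for illustration):

    import numpy as np

    def gd_step(theta, grad, step_size=0.1):
        # The "learning rate" is literally this step size: how far to move
        # along the negative gradient. Nothing is "learned" in this line.
        return theta - step_size * grad

    # Toy quadratic loss L(theta) = ||theta||^2, so grad = 2 * theta.
    theta = np.array([1.0, -2.0])
    for _ in range(100):
        theta = gd_step(theta, 2.0 * theta)
    print(theta)  # converges toward the minimum at the origin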

But hey, none of us know how learning works anyway, right? So maybe it's not such a big distinction. As you say, none of us can pinpoint why a model isn't learning vs why we are.


I think the problem with these words is that their technical meanings differ from the common lexicon. But this is quite true for any field. "Field" is even a good example of this, as mathematicians use it in a drastically different way than I just used it now. This can make people think they understand the technicals more than they do. But if you're making the argument that ML needs to learn more math and needs more rigor, then I'd defend that claim. It is a personal pet peeve of mine (fuck man, how often I have to explain to my research group what a covariance matrix is and why it is essential to diffusion is absurd).
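
Since the covariance point came up: it's baked into the very definition of the diffusion forward process. A minimal sketch of the standard DDPM forward kernel, q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I), using the common linear beta schedule (just a sketch, not any particular codebase):

    import numpy as np

    # Standard DDPM forward process: q(x_t | x_0) is Gaussian with mean
    # sqrt(abar_t) * x_0 and isotropic covariance (1 - abar_t) * I.
    betas = np.linspace(1e-4, 0.02, 1000)   # common linear schedule
    abar = np.cumprod(1.0 - betas)          # cumulative product of (1 - beta)

    def forward_sample(x0, t):
        # The covariance term decides how much of x0 survives at step t.
        eps = np.random.randn(*x0.shape)
        return np.sqrt(abar[t]) * x0 + np.sqrt(1.0 - abar[t]) * eps

    x0 = np.ones(8)
    print(forward_sample(x0, t=500))  # mostly noise by mid-schedule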


A lot of what we see in ML is cargo-cult engineering and not fundamental research. Most of it is applied research or engineering - there is a little bit of fundamental research that actually expands our own knowledge about how things work and what their limits are, while applied science keeps marching on (maybe towards fundamentally impossible goals).


I'm not the parent commenter, but it grinds my gears because the answer is obvious. Humans value human creativity because of emotion, shared experience, and the value we place on each other as humans.


It's just as obvious to me that humans do not actually care where creative works they appreciate come from.

Some of my favorite creative works came from some awful people and others came from algorithms.

I don't care. It does not affect the works or the way in which the works affect me.


Awful people are humans, with human experiences.

All algorithms are made by humans and/or process human input.

And besides, I never said creativity is a requirement for appreciation. I appreciate things in nature regardless of the fact they weren’t the result of creativity.


The fashion industry, and pretty much the concept and existence of luxury brands, run counter to this claim.


Because the implied answer is "it isn't different". That grinds our gears (for various values of "our") because 1) it assumes an answer to what is, at best, an open question, and 2) we think it assumes the wrong answer.

If asked in good faith (not assuming the answer), I can agree that it's an important question.


ChatGPT can already replicate the bugs in an average shop; I would say it is becoming post-human if we remove its ability to generate nonsense.


> Why does this grind your gears?

For the same reason it ground Kurt Cobain's gears.

"He knows all our pretty songs, and he likes to sing along. But he don't know what it means"

I always thought that was a bit condescending, but it applies perfectly to ChatGPT.


> I think it's an incredibly important question

Why do you think that?

If we had a widely accepted answer, how would the world be different?


Because we could most likely use that information to gain greater insight into how humans learn and, most importantly, to innovate. We could also strive to create better AI based on the principles discovered.



