Hacker News
The AI Lie (muddycolors.com)
38 points by labrador on April 25, 2024 | 23 comments



> It has become standard to describe A.I. as a tool. I argue that this framing is incorrect. It does not aid in the completion of a task. It completes the task for you.

A modern camera automates a lot of tasks that were manual on older cameras, which in turn automate a lot of tasks that would be involved in painting the scene by hand.

Sometimes it's useful to work at the higher level allowed by automation, and sometimes it's useful to have fine-grained creative control. These aren't mutually exclusive - you can/should mix approaches where appropriate.

I think the author sees "AI" as solely "a button one can push that spits out images", which can be the case, but there's also image-to-image, ControlNets/T2I adapters, LoRAs, inpainting, etc. A lot of generative AI image creation workflows involve an initial solid-color sketch followed by image-to-image refining with a low diffusion strength, which allows exerting an arbitrary level of control.
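
To make that concrete, here is a minimal, hypothetical sketch of that sketch-then-refine workflow using Hugging Face's diffusers library (the model name, file names, prompt, and settings below are illustrative assumptions, not anything from the article):

    # Hypothetical sketch of low-strength image-to-image refinement with diffusers.
    # Model name, file names, prompt, and settings are illustrative assumptions.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Start from a rough solid-color block-in painted by hand.
    sketch = Image.open("color_sketch.png").convert("RGB").resize((512, 512))

    # Low strength preserves the hand-made composition; raising it cedes more
    # of the decisions to the model.
    result = pipe(
        prompt="oil painting of a harbor at dusk, warm light",
        image=sketch,
        strength=0.35,
        guidance_scale=7.5,
    ).images[0]
    result.save("refined.png")

The point being that the strength parameter is exactly the dial for how much control you keep versus hand over.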


AI is just a way to market software. In the 70s and 80s we had "expert systems": knowledge packed into software. Now when you buy hardware such as the new 3D X-ray systems for airports, it can use high- and low-energy X-ray and density data to guess the substance of objects. This is an expert system, but it's trivialized to the point where it's not labelled as something special; it's just a feature.

This is a natural process. Hype up new stuff as "intelligent", "expert", "AI". Down the line it will be just another piece of software functionality. Doesn't mean anything special.

Right now "generative AI" is being heavily subsidized to gain market share. All these prompts generating text and image cost A LOT of money in training, inference, etc. If we were charged the real cost, it wouldn't seem like such a magical thing.


I dunno, it's hard to agree that the outputs contain "no emotion" when you see these examples (scroll down):

https://arstechnica.com/information-technology/2023/09/opena...

In particular, the potato king, and the paper girl and cat have warmth and charm. (Do I have to say "apparent" warmth and charm?) The porcelain woman is beautiful and a little disturbing. I still have these responses to the art whether you tell me a human made it or not.

Is the important internal state in the viewer, or in the artist? (actual question!)

Likewise, claims like "It simply runs a program... It produces averages" don't match my emotional response to those images.

Simultaneously with this response, I do feel for artists, and the rest of us who are next on the list.


Is anyone even making the argument that AI replaces real art?

The thing is, there's an ocean of use cases for art where people don't really scrutinize the intentions, and success in those fields is really just defined as the viewer going "oh, that was cool."

Things like marketing materials, supporting assets in animations, stock photos, concert visuals... hardly anyone is really taking a microscope to these artworks.


> Is anyone even making the argument that AI replaces real art?

I've never seen a case of this. It's an absurd argument. It's like saying AI replaces human AI scientists. AI needs humans to create new styles of art, just like it needs AI scientists to improve. We are a long way from AI not needing humans, imho.

The art at Burning Man is not winning awards (afaik) but people love it. There's a lot of snobbery in the art world. I think the best example of that is when Truman Capote didn't want to meet Andy Warhol because he was a window decorator and a commercial artist. I reject this kind of thinking.

Then there's the money in the art world, which is another can of worms. I have no idea how AI will affect this, but I suspect there is some fear in the art world about it. It will be interesting to see.


"It has become standard to describe A.I. as a tool. I argue that this framing is incorrect. It does not aid in the completion of a task. It completes the task for you. A.I. is a service. You cede control and decisions to an A.I. in the way you might to an independent contractor hired to do a job that you do not want to or are unable to do. This is important to how using A.I. in a creative workflow will influence your end result. You are, at best, taking on a collaborator. And this collaborator happens to be a mindless average aggregate of data."

As a non-famous, non-professional artist, this seems like a snobby take to me. Every one of these arguments is logically inconsistent with what I know and have experienced using LLMs, Stable Diffusion, DALL-E, and other generative machine learning algorithms. I wonder what the author would say about plotter artists or CGI artists... would it sound like his take on generative art models?

I guess the only point I would really like to make is this: a closed mind is already defeated.


I agree with the closed mind thing, except it appears your mind is closed.

This artist is saying that it’s not a tool because it disconnects him from his task, rather than helps him invest the task with his own agency. This is not logically inconsistent.

If it is not consistent with your experience, then it suggests there can be different experiences. But perhaps you don’t care as much about your own contribution to your work, because, as you say, you aren’t “famous.” All that means is that what makes you special is not as valued by strangers as what makes him special. How is that an argument to make yourself less special?

His argument is completely consistent with my experience and feelings about using AI. I am in a completely different field.


I disagree with his notion that I just cede control of my output to AI.

I am using LLMs to help me improve my writing. I was asking for revisions, but I found myself just copying what the LLM was outputting.

Instead, I told the LLM to analyze my writing and encourage me to create more effective sentences. Rather than generating the output for me, the LLM helps me think of better revisions of my writing.


It's consistent if the author comes out and predicts other things non-tool (i.e., agentic) AI is likely to do in the future. Otherwise it's a suspiciously specific argument.


Yeah, the bit that gets me is the assumption that an artist just _yields_ to the AI's choices.

The author says in the article "my own hand, the single most valuable asset I possess", but I'd say that much greater is the artist's eye: the artist doesn't just accept every line that flows out of their pencil, but knows what to throw out, what to redraw.

The infamous stereotype of AI art, the hand with too many fingers, is the outcome of a lot of people being given an artificially skilled hand without the skilled eye to go with it...

The AI artist who doesn't just cede to the AI, though, is bringing their own hand and eye to the work, not accepting the mere output of a prompt, but actively remaking it as an expression of their own craft and vision.


With the difference that they have to invoke incantations instead of using their invaluable hands :')

And I guess that could create the much dreaded feeling of yielding up their artistic agency.


I've always wondered why people didn't edit out the extra fingers and legs. I haven't played with it myself - is post editing difficult?


Not really: you erase and then inpaint, or you replace the offending area with multicolor noise, put a bit of multicolor noise over the whole thing, and then do an image-to-image run on that.
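
If it helps, a minimal, hypothetical sketch of the erase-and-inpaint approach with the diffusers library might look like this (model name, file names, and prompt are illustrative assumptions):

    # Hypothetical sketch of masked inpainting to fix a small region (e.g. a bad hand).
    # Model name, file names, and prompt are illustrative assumptions.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    image = Image.open("generated.png").convert("RGB").resize((512, 512))
    # White pixels in the mask mark the region to regenerate.
    mask = Image.open("hand_mask.png").convert("RGB").resize((512, 512))

    fixed = pipe(
        prompt="a natural hand with five fingers, detailed",
        image=image,
        mask_image=mask,
    ).images[0]
    fixed.save("fixed.png")

Only the masked area gets redrawn, so the rest of the image stays untouched.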


You can't stop this train.


Ah yes, the ol' "you can't put the genie back in the bottle" argument—which is completely nonsensical.

Everyone can just decide not to use these tools. Simple.

Put another way, take a look at all the cities around the world making great strides to limit motor vehicle usage…and in some areas outright banning cars. Decades ago, that would have been unthinkable! Today, it's real and it's happening more and more.

Never underestimate the ability of society to wake up from collective amnesia at various junctures and realize that, y'know, actually these technologies suck and we can reach for better ones.


> Everyone can just decide not to use these tools. Simple.

Stated like this, it feels like a parody of your position. We aren't going to have everyone just deciding to stop using the tools.

We could plausibly get legislation that makes it less attractive for people to make use of these tools in their work, resulting in illustration/translation/programming/etc. still requiring the same amount of human labor for a given output. I don't think any governments are in a particular rush to do this currently, but as you say sentiments can change.


I genuinely don't understand this seeming inability to call for effective activism.

We get people to stop using these tools because we convince them it's wrong to do so. Society becomes convinced of things all the time—norms change, what was once considered fine (let's dump all our toxic waste in the river next to our town! what could go wrong?) suddenly becomes verboten.

I cannot for the life of me grasp nihilistic resignation in the face of Big Tech declaring war on human creativity and flourishing. People, you must fight with every fiber left in your being!


> I cannot for the life of me grasp nihilistic resignation [...]

I agree that change is possible through a hard-fought battle, both legally and socially. On the other hand, "Everyone can just decide not to use these tools. Simple." sounded like a naive expectation to the extent that I'd expect to read it as a strawperson set up by someone making the opposing point that unanimous voluntary relinquishing of easily-available tools, against self-interest, is unrealistic.

I'm also relatively optimistic about the future of machine learning. If I thought the change you wanted was entirely impossible, it'd probably be "smug contentment" rather than "nihilistic resignation".


CO2 emissions are ripping and will continue to do so in China and India. You can't stop that train, either. https://ourworldindata.org/grapher/annual-co2-emissions-per-...


A good deep delving of the issues.


Delve is a perfectly good word. I don't know why it caused a stir recently.


I think this is missing the point and not seeing where the technology is going. For generative art, right now it's basically prompt in -> image out, with no iterative process. But I can tell you that in coding, which is a creative endeavor, it is not like that at all. It is highly iterative. You go back and forth shaping the implementation, exploring ideas, veering off into tangents about some obscure encoding question, or this or that. The point is, coding with a GPT is an iterative, creative process where you as the operator bring all of the human elements like vision, purpose, and ingenuity (the truth is, the solution usually has to come from you when working with a GPT, but the GPT is invaluable in the exploratory / prototyping / sounding-board process).

There is a famous video of Picasso painting a fish on a backlit screen. If you imagine what is happening in his process, it is iterative: he envisions a shape, and then his brain communicates to his hands to produce the shape; then he looks at the shape and his brain envisions another shape to build upon the existing one. This happens over and over. You can see the process at work in the video. It is iterative, there is a feedback loop, there is the human element -- that is creativity.

Now imagine there is someone who has a vibrant mind like Picasso's, but a bodily injury such that he cannot use his limbs. Imagine he gets a Neuralink implant that enables him to interface his mind with a generative AI that is able to represent on a screen what he envisions in his mind. This takes the place of Picasso's brain communicating to his own hands to put things on the screen. Instead, this translation of vision to reality is happening THROUGH a generative AI.

If you think about what current generative AI is doing, it is already a crude form of that: a human envisions something -> this becomes a prompt -> the generative AI puts something on the screen in response to that prompt. Now fast forward ten or twenty years to when the process is fluid and iterative, and we all have neuralinks, and we can iteratively think representations into being via generative AI, tweaking every aspect of the final representation, with the totality of prior visual representation available to us like tools and palettes in Photoshop.

You just need to think ahead to where these technologies can go. I think the people who are belittling generative AI are simply not thinking far enough ahead. Like if you saw Steve Wozniak's first breadboard and couldn't envision an iPhone with a retina display in your pocket.


There are very fast text to image or image to image models that run interactively today. A few take well under a second to generate based on the next few characters typed or lines sketched. That's not really new. I built a somewhat slower version months ago. And I think I saw an image generator recently that was over 30 frames per second.

Also Adobe Photoshop has already incorporated image generation.



