> The strange contortions of A.I. hands make me feel a sense of anticipatory nostalgia, for a future when the technology inevitably improves and we will look back on such flaws as a kitschy relic of the “early A.I.” era, the way grainy digital-camera photos are redolent of the two-thousands.
This will be true for most AI "failures". People love to harp on some minor issue some new generative AI has in edge cases and act like it's some insurmountable obstacle that will keep humans superior forever: "haha look at the AI try to make a hand what an idiot"
As humans become more and more irrelevant, we will cling to these minor failures more and more to preserve our egos
There are cheap optimists and cheap pessimists. Both amount to trolls and don’t particularly need to be fed or heeded.
The truth is we don’t know how far today’s breakthroughs in AI will take us. There’s quite a lot of headroom to explore, but history has shown that there are always some brick walls as well.
The rich discussions can be good, but the cheap flamewars are exhausting.
> The truth is we don’t know how far today’s breakthroughs in AI will take us
That's the central question. So far, AI has proven great at ingesting huge amounts of "prior art" and producing variations on the themes it has learnt (more or less) according to the prompts you give it, but it consistently fails to grasp the underlying rules (gravity, perspective, anatomy etc.) that are supposed to govern the images it generates. So it's better for "surrealist" art than for images that are supposed to be realistic...
The Surrealist Manifesto by Breton excludes AI, since AI has no thought:
"Psychic automatism in its pure state, by which one proposes to express—verbally, by means of the written word, or in any other manner—the actual functioning of thought. Dictated by thought, in the absence of any control exercised by reason, exempt from any aesthetic or moral concern."
I haven't even seen the position you're arguing against around here recently. But I do see lots of comments similar to yours, so what do you have to add?
Saying that intelligent machines are impossible because of hand-wavy metaphysical reasons (or vanity as you claim) is completely different from saying that this technique can or can't be improved to do things that it can't now.
Jet turbines definitely can and have been improved to exceed the capabilities of birds, and it surely doesn't violate the laws of physics to build a machine that flies by flapping wings, but one technology doesn't inherently lead to the other.
It's not a choice between unlimited improvement and an insurmountable obstacle if improvements are going in a different direction than what's being considered.
People love the idea of Superman because he can do everything. I think there is a psychological need to have a machine that can do everything and these AI tools feel like we’re not far off at times. Close to the supernatural.
I think there is some deep psychology at play that makes us want to believe we’re about to solve all the things we’re insecure about.
Being good at art, being good at speaking or writing, handling the idea of mortality, being great at maths.
We’ve decided that we want machines that can outdraw everyone, in all cases, with zero limitations, and we might get there, but we’re also happy to overlook some of the flaws and setbacks of such systems in order to believe that we’ve solved art.
As someone who’s played with these tools, and who has done a lot of illustrating in my life as a hobby, I can tell you that there are still setbacks: these tools don’t read minds. I still don’t get to produce what’s in my mind’s eye. I can come close, but what comes out is a different product. The upside is that they do make some great art.
Not around here; the "AI will never replace artists" sentiment, based on the failures with hands, is much more frequent on Twitter among artists. Not that I disagree with their frustration, but the optimism that it will always be second best to pure human creativity seems like egoistic coping, especially since recent generations from Stable Diffusion show the hand issue is nearly solved.
I'm just extrapolating that kind of wishful thinking to anyone who thinks that AI will just be a tool for limited uses, always inferior to the less naïve human whose multi-modal efficiency could never be matched.
There's nothing in the laws of physics that prevents AI from becoming superhuman in all domains. There are people who nevertheless claim AGI is impossible for handwavy reasons.
I just asked ChatGPT for something today; I framed it as a question in an attempt not to influence it: is there a...?
And it told me firmly "yes", followed by fiction, with a plausible link that went nowhere. Honestly, I'm not entirely sure it was complete fiction; it was so plausible, and exactly what I wanted to hear. But I couldn't find it with Google.
What's bothering me is not just people saying this is like a human being, but people saying the next generation, the next step in the progression, will bring us closer to a human mind. "Better", if it is more deceptive, will be worse.
Improving what it is already good at will produce something but not a human-like mind and not a superhuman oracle.
Humans have eyeballs, but an artificial eye is never going to be a human-like mind, no matter how much you develop it and regardless of whether it ends up better or worse than a human eye.