Hacker News
The expanding dark forest and generative AI (maggieappleton.com)
438 points by colinprince on Jan 4, 2023 | 418 comments



> You thought the first page of Google was bunk before? You haven't seen Google where SEO optimizer bros pump out billions of perfectly coherent but predictably dull informational articles for every longtail keyword combination under the sun.

> Marketers, influencers, and growth hackers will set up OpenAI → Zapier pipelines that auto-publish a relentless and impossibly banal stream of LinkedIn #MotivationMonday posts, “engaging” tweet threads, Facebook outrage monologues, and corporate blog posts.

I think there's a bright side if people can't compete with machines on stuff like that. People shouldn't be doing that shit. It's bad for them. When somebody makes a living (or thinks they're making a living, or hopes to make a living) pumping out bullshit motivational quotes, carefully constructed outrage takes, or purportedly expert content about topics they know nothing about, it's the spiritual equivalent of them doing backbreaking work breathing in toxic dust and fumes.

We can hate them for choosing to pollute the world with that kind of work, but they're still human beings being tortured in a mental coal mine. Even if they choose it over meaningful work like teaching, nursing, or working in a restaurant. Even if they choose it for shallow, greedy reasons. Even if they choose it because they prefer lying and cheating over honest work. No matter why they're doing it and whose fault it is, they're still human beings being wasted and ruined for no good reason.


Unfortunately, the price will go down, shifting the supply curve out, and we'll get ever more garbage. Some of it will be dangerous or addictive to susceptible portions of society; most of it will just be boring and stupid to the rest of us.

Wait for the first kid who dies trying an AI-generated "challenge", or the first violent mob killing caused by AI-generated outrage porn. AI-generated video porn may look like the triple-breasted whore of Eroticon Six today, but with sufficient influencer content (playground videos) and porn (dungeon) footage, I suspect you could generate more than enough novel and relevant (child S&M) porn for everyone.


To play devil's advocate: If AI is producing content that would be morally objectionable because it harms someone, but nobody was harmed in the making of it, are we still right to find it morally objectionable?


If your 10 year old dies because of an AI created Tide-pod or Street-surfing challenge, I think you'll find it objectionable. Same for highly targeted fake news clickbait that causes a race riot.


Why does said 10 year old child have unfettered access to the internet?

Also, why is an AI ultimately responsible for a child choosing to perform some challenge? What if the 10 year old child played amogus & then decided to re-enact it irl?

I'd say it's less about the source of a challenge or false factoid, and more a cultural problem of not monitoring kids enough; parents give their kids phones & let 'em use TikTok to their heart's content because it keeps the kids quiet. And immature kids love TT because it's easy to generate clout and therefore dopamine.


> Why does said 10 year old child have unfettered access to the internet?

Q: Of the kids in my 13 year-old's class, what % would you guess have WhatsApp on their phone?

Spoiler: "If you live in a country in the European Economic Area (which includes the European Union), and any other included country or territory (collectively referred to as the European Region), you must be at least 16 years old (or such greater age required in your country) to register for and use WhatsApp"[0]

[0] https://faq.whatsapp.com/695318248185629


Why this.. why that..

My 10 year old child just got fucked and I deserve vengeance. I want to see heads roll and I want to see the world burn.


Why does it need to be unfettered? They have peers. Kids smoke/drink and it's literally illegal to sell them cigarettes/alcohol.

A significant percent of adults are also susceptible to crazed conspiracy theories... see QAnon. Now allow AI automation to target individuals and small groups to "optimize engagement" with apparently personal communications using A/B statistics. Everywhere all the time, because it's cheap, because it pays the rent. Some of them will be drawn to artificially generated violence inciting agit prop, because it works. There are negative externalities to that.


So...someone was harmed? How is this related to my question?


In terms of enjoyment, yes. In terms of overall harm reduction, no.

It's still a bad thing for humanity at large but it may have a knock-on effect of pacifying people who would otherwise pay significant amounts of money for new content to be produced. If we can placate those people, at least the money dries up for those other sources and maybe they would move on to doing something other than harming children.

Tricky spot, to be sure.


I don’t understand why procedurally generated porn of made up humans is harmful to someone. How is this different from video games that allow you to shoot and stab thousands of virtual people without any proven deleterious effect on real life?


It would normalise that behaviour. There are plenty of studies that show pornography warps the watchers' minds.

Even video game violence would do it but a vast majority of the experiences are easily identifiable as cartoonish. So it may give you the idea that you slaughtered a 1000 NPCs but each kill is nowhere close to the visceral reality. The games which have a focus on realistic killing, either do so to aliens/monsters or they are so off-putting that they never manage to find a large audience.

In any case I look at the public discourse about war and violence and I find that people are eager to jump into fights and don't think twice about supporting their government in bombing and killing the shit out of other populations. They couch it in some shallow moral argument. Often the deterrent to war isn't some moral concern but the fact that the other side also has significant weaponry and might inflict casualties on your side as well.

In any case I hear a lot of this overton window concept and there is a lot of merit in the argument that excessive amount of AI generated deviant porn content will eventually make people think that all this depravity is normal and we should just look the other way.


> It would normalise that behaviour. There are plenty of studies that show pornography warps the watchers' minds.

Is there any evidence that it would normalize that behavior? That sounds like a moral intuition, not something backed by evidence. And even if the mind of the watcher was warped, is that any of your business as long as it's not increasing harm to society? In the West we let adults of sound mind harm themselves in lots of ways they choose, from eating terrible food to doing drugs to drinking to smoking to extreme sports. It's their body, their choice. As long as society doesn't have to bear the cost of it, like say with drunk driving, in the West we have accepted that adults have the right to do what they want to themselves.

> Even video game violence would do it but a vast majority of the experiences are easily identifiable as cartoonish. So it may give you the idea that you slaughtered a 1000 NPCs but each kill is nowhere close to the visceral reality. The games which have a focus on realistic killing, either do so to aliens/monsters or they are so off-putting that they never manage to find a large audience.

Great, then really depraved AI-generated porn will likely see the same lack of adoption and stay a niche phenomenon for a few people in a basement somewhere. What's the problem?

> In any case I look at the public discourse about war and violence and I find that people are eager to jump into fights and don't think twice about supporting their government in bombing and killing the shit out of other populations. They couch it in some shallow moral argument. Often the deterrent to war isn't some moral concern but the fact that the other side also has significant weaponry and might inflict casualties on your side as well.

Were people not eager to jump into fights and support their government's jingoistic rhetoric prior to the creation of violent videogames? Do you have anything to back the claim that the popularity of violent videogames led to a rise in ultra-nationalism and aggressive foreign policy?

> In any case I hear a lot of this overton window concept and there is a lot of merit in the argument that excessive amount of AI generated deviant porn content will eventually make people think that all this depravity is normal and we should just look the other way.

Is there a lot of merit to that argument? How did you decide that?


The method of gratification from porn is different than from video games.

I don’t have to get off to enjoy obliterating people back in the quake days, but porn is a different psychological mechanism entirely.

I’m trying to assume you’re asking the question to spur conversation, and not that you actually hold the view that watching child porn, whether synthetic or not, is morally equivalent to playing video games.


Is it? Are they not all dopamine pathways in the end, with addicts emerging from all of them? Where's the evidence against that?

People spent decades arguing that rock & roll, Dungeons & Dragons and videogames would lead to moral corruption and decay. Turns out there is no evidence to support any of it in the end. The immorality was always "self-evident". That's not enough to justify banning something though as evidence has shown again and again.

Also, whose morality are we talking about here? Morality is all over the map depending on which culture you ask. Each cares more or less about sanctity, divinity, loyalty, freedom etc.

And you're correct, my personal opinion on this is irrelevant.


>I’m trying to assume you’re asking the question to spur conversation, and not that you actually hold the view that watching child porn, whether synthetic or not, is morally equivalent to playing video games.

I think you're less trying to assume this, and more trying to cast aspersions without breaking HN rules. Barely.


I think sexuality is different to violence. I've heard plenty of stories about people developing weird kinks from porn, or having to turn to weirder and weirder stuff to get off. But I haven't heard of any cases of video games turning people violent.

This is extremely unscientific, I know.


OK, let's assume your argument is correct (which I don't think it is, but let's go with it for now). Let's say that media depicting some kinds of sexual material warps what gives some people sexual satisfaction.

...so?

Should my preferences of what I find acceptable and not acceptable in that domain somehow influence what content others are allowed to consume? If the creation of that content actively causes harm, then sure: that content should be restricted. If we are restricting it because you think it's icky and it might lead to other things you find icky, then we took a wrong turn somewhere.

That's like saying we should outlaw mozzarella cheese because it might lead to the consumption of triple cheese sausage pizza. Neither of those acts hurt me, so why should I care about people choosing to participate in either?


Nah, compare violence in media by sex/age breakdown of the victim. Excluding sexual stuff completely, it's many, many orders of magnitude more acceptable to show the gory killing of a male character on screen than it is to show the same of a woman or child.

There are all of these unwritten but extremely obvious rules that dictate what is acceptable & what is not.


Unfortunately I don't think we have data here to make conclusions, only a few moral intuitions, which historically haven't been the most reliable compasses for navigating these grey areas.


See I somewhat agree with you, but you have a limited perspective in a way, that maybe as a gay man I can help expand on without being shouted at for being MRA trash just because I notice things.

You say "video games allow you to shoot and stab thousands of virtual people", this isn't true. In actuality, video games allow you to shoot & stab thousands of virtual _men_. Replace the baddies in a game with women and or children and people will go mental.

Same thing with media. Television/movies: plenty of men are brutally gored on screen with no one batting an eye, but generally the camera will cut away if there's violence against women and/or children. The only time you do see it is when they want to make a very strong impact, i.e. "this guy killing this woman is a V E R Y bad guy, and you know that because it's not just some anonymous dude he's goring".

There's plenty of hypocrisy in it; Game of Thrones people famously complained about the Cersei rape scene (also in the books, although more ambiguous) meanwhile the average episode spends a good portion of the time showing us the insides of three dozen guys' ribcages.

Or Altered Carbon, where in the book series Takeshi is forced into a simulation as a woman and tortured; media at the time said they were glad it didn't make it into the show because it would be "misogynistic torture porn". Apparently none of the gratuitous violence against male characters was considered to be bad.

And _that's_ why it's different. Because certain types of porn could be generated that infringe in the protected classes that society creates.


That all sounds about right. The point I was making is that the poster was looking at it from the standpoint of "reducing harm", whereas in reality they were merely following their moral intuitions about what's sacred and taboo in society, and then backwards-rationalizing it through the "harm" moral axis.

Very common in the liberal West where we think of ourselves as being above taboos and caring about divinity, except we still do very much. E.g. digging up the corpse of a dead relative and having sex with them doesn't harm anybody, but is highly taboo. Having sex with a dead family pet, doesn't harm anybody, but highly taboo. Siblings on contraception having sex, zero harm, but very taboo. Hentai of protagonists who could be underage, zero harm, very taboo. etc. etc.

Women, children, LGBTQ members, the involuntarily unhoused, minorities, Muslims and many others are all currently sacred groups whose sanctity you cannot violate even in a work of art and fiction.

So back to my point, it has nothing to do with harm, it's all about sanctity.


How do you know if an AI porn character is 17 or 18?


How do you know when a porn character is 17 or 18?



I think it's important to differentiate between something that is personally icky to you and something that is 'bad for humanity at large', unless you can demonstrate how it's bad for humanity at large and why that makes it morally objectionable.

Morally objectionable acts require (IMO) a victim.

I think it's essential that before we decide that we are going to limit what people can do we determine if what they're doing is actually hurting someone to the extent that society should limit that behavior or if we're just limiting their behavior because it's different from ours and we don't like it.

In my personal moral worldview, if a person can simulate a universe with no actual people in it in which they commit whatever debauchery they feel necessary without causing harm to anyone else, I say have fun. It has literally no measurable impact on me, so it would be immoral for me to insert my personal preferences into the matter.


Most of the people you describe have little to no moral compass. Most of the time they are above the accepted morals of society (a very Nietzsche perspective). These are the marketers of the world who encourage you to "pollute the web" and the media manipulators whose secrets are to "con the conmen". The reality is, they make more money than any of us and sleep just fine every night, because they know that nobody seeks honesty or reality anymore. The more unbelievable the headlines and articles, the more they warp our compass.

No sane person doing this will push reasonableness, complexity, or mixed emotions.


> they know that nobody seeks honesty or reality anymore

It’s only true for them. Not for us.


Us is them. At least I say this in the general sense that a healthy portion of posters here are in marketing, or are looking for some way to make a living in any way possible. Trying to make this an us-vs-them thing is pretty much meaningless, as it's completely ineffective at solving the problem.


No, this is something I can't agree with.

The whole point of social norms is that they define some boundaries of what is acceptable and not in some community. If someone violates the social boundaries because "they are looking for some way to make a living in any way possible" (which is not an excuse, I mean, "any way possible" would also justify stealing, robbing and murder to make a living) then they themselves choose to be "not us" and deserve to be shunned by our community.

This also does have a certain effect at solving this problem: if you know that telling people what you want to do is going to result in losing reputation and them refusing to assist you, then that does act as some deterrent. The social pressure reduces the likelihood that people will choose to join that industry, and it increases the likelihood that people in the industry will refuse some activities even if they are profitable. Even from pure game theory and evolutionary psychology we can observe that 'punishing defectors' is a viable strategy that gets some results.

It is important that we do not legitimize or normalize unethical behavior just because someone is trying to make a living through marketing; so whenever someone says "ah, we're all in the same boat, isn't everyone doing this?" it's important to loudly remind everyone that no, we're not all acting like this - ethics is a thing and proper people refuse to do improper things.


> Trying to make this an us-vs-them thing is pretty much meaningless, as it's completely ineffective at solving the problem.

i don’t mean this as confrontational as it’s going to come across, but this is nonsense. it isn’t ineffective at all.

would you mind expanding on what you mean?


What's the difference between not seeking, and not knowing how to seek?

From a moral perspective, a lot. From an amoral pragmatic perspective, not a lot – unless you think it'll somehow benefit you to give people the ability to effectively seek such things? Hah.


> Most of the time they are above the accepted morals of society (a very Nietzsche perspective).

That sounds more like Marquis de Sade than Nietzsche, imo.

I think Nietzsche is amoral only when what is moral is arbitrary and self-deprecating.


> We can hate them for choosing to pollute the world with that kind of work, but they're still human beings being tortured in a mental coal mine.

No they're not. They're exploitatively torturing other people, while deploying machines to mine the coal. They deserve any bad thing that happens to them, because they have the education and resources to do better but choose not to.

I don't care that they're human. If they have agency and resources and leverage those in such willfully zero-sum fashion as you describe, they've chosen to gamble on profiting from the suffering of others. Empathy and kindness are good things, but empathy for willfully abusive people is maladaptive.


> I think there's a bright side if people can't compete with machines on stuff like that. People shouldn't be doing that shit. It's bad for them.

I don’t know. People already pump out a ton of bullshit from content farms then litter their web pages with ads and last-click attribution.

End user value isn’t what drives a lot of “information” businesses. See any recipes site or “news” that’s regurgitating what someone “newsworthy” tweeted.

It will be interesting to see how search engines adjust. Maybe someone will make the GetHuman (https://gethuman.com/) equivalent of search.


I was just thinking about that today before reading this article. It took me like 45 mins one day to find a health site that wasn't trying to sell a product. Most of these for-profit sites tailor their content to jibe with the products they are selling, which can prevent someone from finding information that actually does some good.


On one hand I completely agree, but on the other hand, from their perspective it may be "well I paid $500 for this turnkey point-and-click app and now it makes money for me in the background while I sit on my couch making music all day". This new streamlining makes it more soulless in general but less soulless for the individual people responsible for it because they're doing and seeing less of the actual bullshit themselves and deferring it all to the automation pipeline.

They may (and, frankly, should) still feel something about what they're putting out into the world, but they can more easily blind themselves to it and just tell themselves almost everyone's doing something dumb to make a living and they're not even the ones actually "doing it" themselves.


"I think there's a bright side if people can't compete with machines on stuff like that" - Hadn't thought of it like this. Good point. Perhaps it will be akin to email getting better spam filters. And perhaps there is a better way than a 3,000 word article about how long to boil rice.


In fact that's exactly what I want LLMs to be doing: read the whole internet and write articles on all topics, answer all imaginable questions, make a 1000x larger Wikipedia, a huge knowledge base. Take special care to resolve contradictions. Then we could use this GPT-wiki to validate when models are saying factually wrong things. If we make it available online, it will be the final reference.


How do we know which sources contain factually right things? What happens when the facts change? It used to be a "fact" that the Sun revolved around the Earth, and that stomach ulcers were caused by stress...


Even humans use a few relatively simple heuristics to decide what to trust.

One is that objective truth is internally self-consistent. If one AGW denier claims it's the sun, and another AGW denier claims NASA falsifies the data, and they support each other, then you can judge that these are conflicting claims and decrease your trust.

Also, false claims usually focus more on attacking competing claims than on coming up with a coherent alternative. And they tend to be more vague in specifics (to avoid inconsistency); compare, for example, vague claims about all scientific institutions faking data vs. the Exxon files containing detailed reports for executives.


LLMs don't judge truth. They judge which answer is statistically correct. There is a difference.


They judge the maximum likelihood answer. I take issue with your usage of the word "correct" because it can too easily be confused with "accurate."


The statistically correct answer is not necessarily the true one (if there is such a thing as 'truth'). Many people can believe something to be true, and if I query those people I can calculate which answer is statistically correct. That's the 'wisdom of the crowd'.


The "wisdom of crowds" is mostly bullshit. It works fine for trivial things like estimating the number of beans in a jar. So what. It completely fails for anything requiring deep expertise.


Seems like this is what you get when you have programmers try to be statisticians, I suppose.

"Statistically correct" gobbledygook that signifies nothing.


Calling a flat earth or a geocentric universe "statistically correct" at past historical points is really inane, don't you think? In doing so, you abuse the notion of what statistics is supposed to represent, which is, generally, a statement of an estimate (and/or distribution), as well as the precision of that statement. Since "correct" is binary, it carries an implied precision of 100%, which renders the notion of "statistically correct" pretty absurd.


The model can describe the distribution of answers and their confidences. So there will not be one right answer for everything.
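As a toy illustration of what "the distribution of answers and their confidences" could mean (the raw scores below are invented for demonstration, not from any real model), softmax over candidate-answer scores plus entropy gives a crude confidence signal:

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    """Shannon entropy: low when one answer dominates, high when answers are tied."""
    return -sum(p * math.log(p) for p in probs if p > 0)

confident = softmax([8.0, 1.0, 0.5])   # one answer dominates
uncertain = softmax([2.0, 2.1, 1.9])   # answers are nearly tied

assert entropy(confident) < entropy(uncertain)
```

Whether that entropy tracks real-world accuracy is exactly what's being disputed here, of course; it only measures how spread-out the model's own distribution is.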


Nonsense. There is no reliable way to model confidence on such issues.


That's why repeatable experiments are so important to science. Anybody can independently verify a testable claim.


Hahaha. Reliable confidence, isn't that a contradiction in terms? If a model makes mistakes but has great confidence estimation why wouldn't it make fewer mistakes in the first place, since it knows when it is wrong.

But if an LM looks up a topic and sees contradictory answers, and none of them is much more reputable, maybe it can still use that information to say it is inconclusive. Knowing that a topic is controversial or not present in search engines is useful information. ChatGPT would hallucinate; a search + ChatGPT solution would refrain from hallucinating. It could also give references.
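A minimal sketch of that abstain-when-sources-conflict idea (the answers and the agreement threshold here are invented for illustration, not from any shipping system):

```python
from collections import Counter

def answer_or_abstain(candidate_answers, min_agreement=0.75):
    """Return the majority answer only if enough retrieved sources agree."""
    if not candidate_answers:
        return None  # nothing retrieved: abstain rather than hallucinate
    answer, count = Counter(candidate_answers).most_common(1)[0]
    if count / len(candidate_answers) >= min_agreement:
        return answer
    return None  # conflicting sources: report the topic as inconclusive

assert answer_or_abstain(["Lima", "Lima", "Lima", "Cusco"]) == "Lima"
assert answer_or_abstain(["Lima", "Cusco"]) is None
```

The hard part in practice is the bit this sketch waves away: deciding when two differently worded answers count as "the same" and how reputable each source is.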


You shouldn't be so confident without knowing how these things work; there is absolutely a simple and built-in way to model this. LLMs, for example, are simply calculating the next word or phrase sequence that is most likely given previous results and modeling information. So they can definitely tell you the combined likelihood that the answer is 'Peru is a cat' vs 'Peru is a country' and provide you the exact statistical likelihood of each.
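To make that concrete, here's a toy sketch (the context table and its probabilities are made up, not from any real model) of how chaining a model's conditional next-token probabilities scores two candidate continuations:

```python
import math

# Invented conditional probabilities P(next token | previous two tokens).
# A real LLM produces these from a neural network, not a lookup table.
toy_probs = {
    ("Peru", "is"): {"a": 0.9, "the": 0.1},
    ("is", "a"): {"country": 0.85, "nation": 0.1, "cat": 0.0001},
}

def sequence_logprob(tokens):
    """Sum log P(token | previous two tokens) over the sequence."""
    total = 0.0
    for i in range(2, len(tokens)):
        context = (tokens[i - 2], tokens[i - 1])
        total += math.log(toy_probs[context][tokens[i]])
    return total

country = sequence_logprob(["Peru", "is", "a", "country"])
cat = sequence_logprob(["Peru", "is", "a", "cat"])
assert country > cat  # the model rates "country" as the far likelier continuation
```

Note that this only ranks continuations by how likely the training corpus makes them, which is the crux of the disagreement below.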


"So they can definitely tell you the combined likelihood that the answer is 'Peru is a cat' vs 'Peru is a country' and provide you the exact statistical likelihood of each..."

...in the context of the texts that the LLM is built on. Not in the context of the real world, where P('Peru is a country') = 1.0 and P('Peru is a cat') = #cats named Peru / #things in the world (or something).


Most of your world is text. Sure there's a sliver that isn't, but the reality you directly see is a tiny fraction of the reality you know from reports. Come to think of it, I've only ever seen reports of Peru, never actual Peru.


Actually I am confident exactly because I do know how LLMs work, and your comment fails to address the issue at all. Such models can't tell you anything useful about the probability that a particular statement is accurate.


That's how likely they are to occur near each other, not how likely either statement is true. Rude of you to preface your comment the way you did while making this error.


The parent post isn't arguing which thing is capital-T true (if such a judgement is even universally possible). They are talking about modeling statistical confidence, which is purely an emergent numerical property of the data and makes no commentary on objective truth.


How is that different with LLMs versus badly-written human generated content? Most clickbait/SEO articles are as poorly researched as they come, and shouldn't be assumed to be accurate anyway.


I think you get what you pay for. Free information isn't free.


Profound


It is very nice of you to be so concerned about these folks' inner lives and psychological well being! Are you going to pay their rent, and feed them too?


Honestly I don't sympathize with either of these sentiments. If the only work people can find is by making the world a miserable place, perhaps we have too many people.


1. If "bullshit motivational quotes, carefully constructed outrage takes, or purportedly expert content about topics [the author] knows nothing about" makes your world a miserable place, you are part of the globally privileged. Unplug and get some fresh air, ffs.

2. Lots of people like that stuff. Who are you, and who are OP to decide what content gets produced and consumed? The morality police?

3. The irony of complaining about that stuff on a site dedicated to the industry that platforms that kind of stuff is just astounding. Perhaps the real problem is not the content, but the medium that allows its mass dissemination?

4. The material misery that would be created by shifting entire industries out of work (if even for a "few" years to who-knows-how-long) would be measurably greater than the micro-miseries of the kinds of things OP seems to complain about.


This is just a reminder for me to unplug from the Internet, beyond what I need to do my banking, pay bills, and research various ideas that may be of use in future projects (personal or my employer's), and to invest more time in friends, family, and local community instead. As I'm getting older I was already doing that; I've never really spent a whole lot of time on Internet forums anyway.

I've enjoyed playing with ChatGPT, and I have a copy of Stable Diffusion at home; they are of some utility if you take the output with a giant bag of salt.

The people I feel for are those who have retreated from, or are uncomfortable in, society in general and who invest all their time in Internet communities, since they will be the most vulnerable. I'm fully aware of the irony that some might sceptically believe that this comment itself is AI-generated rather than written by a human, and that any responses may well be cut & pasted from ChatGPT, and I keep that in mind when reading and writing.


ChatGPT is just the canary in the coal mine. I think a massive mistake I keep seeing people make is assuming that ChatGPT is a peak rather than a checkpoint at the bottom of the mountain. ChatGPT is not the future; its successors are. We've just started this ascent of Mount AI (after years in the foothills); we're hardly even at base camp, and we have ChatGPT.

I don't want to forecast the future because I think AI is going to change the world so radically that it would be like asking a 13th century peasant to describe 2022. But I feel extremely confident in asserting that it will not be "Internet dwellers addicted to their talking AIs, and then everyone else going about their life normally".


The usefulness of AI depends on training data availability. The reason OpenAI et al. were able to surprise everyone is that they used everyone's data for training without consent.

As the public is catching on[0], what we may get is not some insanely genius AI but a fragmented, private web where no one is stupid enough to publish original content in the open anymore (given that all incentives, psychological and financial, have been destroyed), and models choking on themselves with nothing to train on except their own output.

This is my reasoning for giving higher probability to it being a peak (or very near it). There will be cool, actually useful instances of AI within specialized niches, which it could well transform, but otherwise everyone will go about their life normally.

[0] https://twitter.com/sonnyrossdraws/status/161000295904312116...


Taking data without consent is a real issue, but there is still lots of data out there that is free of copyright. I'd be curious to see a model trained solely on public-domain data (perhaps with an option to include Creative Commons-compliant data). I think there is plenty of knowledge that is free and clear to make a very useful LLM and/or Stable Diffusion model. We may miss out on Wegovy and air fryer reviews, articles on how to beat the stock market with NumPy, and manga art styles, yet there is plenty from a few decades ago that would make for a useful "AI." Even Steamboat Willie may soon be in play.


Eh, you're just switching problems with the 'consent' model. I'm very much in the camp that the corpus of human knowledge is not some company's IP; this just pushes ownership further into the hands of large and well-monied companies and further baits patent/IP trolls to lock up collective knowledge.


I look at it like it's a company's IP in the way oil/gas companies sell the earth's resources. It takes a lot of work to transform raw crude into usable product; similarly, OpenAI and others put a ton of money/resources/work into transforming that knowledge into a workable model.


1. In your analogy, the company is processing resources that humans didn't create. Dead biomass from ages ago is not the fruit of anyone's work, though note that even then you would expect extracting those resources from the ground to be taxed, with the proceeds benefiting you if you happen to be living on said ground.

2. Unlike petrol vs. raw oil, LLM output is not necessarily “better” than its source material. Indeed, there are plenty of authors who both did extensive work and wrote eloquently about it, so when an LLM is asked a question on the topic, their work is among the very few sources—when I talk about an LLM attributing its output, I mostly mean instances like these (not when an LLM aggregates some really common knowledge).

The danger I described is when people are no longer motivated to do such work in the open, assuming it’s then scraped and monetized by LLM operators—or worse, undetectably modified by LLM designers to inject sentiment the original author didn’t subscribe to.


My point is that it's not just a matter of using the source IP directly.

A lot of productive work goes into the creation of the model itself. Those weights & biases did not appear on their own. It could not be created without the source IP, but that doesn't mean the source IP is all you need to produce it.

You need significant amounts of computing and human resources along with cutting edge research to produce it as well. While the art may be derivative in some cases, the model itself is unique and the value produced by these companies.


Transforming oil into petroleum also involves a lot of productive work and it doesn’t go unrewarded and in fact has to be taxed appropriately.

But generally, in the case I described (topics in which just a few authors did most of the original work), I see no fundamental benefit compared to a really good search engine—and such a search engine would benefit open information sharing, since it doesn't do IP laundering.


I guess I don't agree with the assessment that it is a glorified search engine at all. It's a lot more novel than that.

I also agree that consent should be received before using an artists images in the training process.

That said, if one could compute a training image's contribution to the end result of a particular query, it is entirely possible we could see a portion of profits flow back to these artists for the use of their IP in the training process.

But at the end of the day, when you train on billions of images, the end profit might be pretty minuscule. Any single artist's contributions might not actually matter all that much in the grand scheme of things. It's the combination of millions of artists that produce a result.
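As a toy sketch of why individual shares would be minuscule (this is a made-up proxy, not how real attribution would work; proper attribution would need something like influence functions, and all names here are hypothetical):

```python
import numpy as np

def royalty_split(output_emb, train_embs, profit):
    """Crude attribution: split `profit` across training items in
    proportion to their cosine similarity with the generated output.
    A toy proxy only; real per-item attribution is an open problem."""
    train_embs = np.asarray(train_embs, dtype=float)
    output_emb = np.asarray(output_emb, dtype=float)
    # Cosine similarity of each training embedding to the output embedding
    sims = train_embs @ output_emb / (
        np.linalg.norm(train_embs, axis=1) * np.linalg.norm(output_emb))
    weights = np.clip(sims, 0, None)   # ignore anti-correlated items
    weights = weights / weights.sum()  # normalize to a payout fraction
    return weights * profit

# Three "training images" as 2-D embeddings; with millions of items,
# each individual share shrinks toward nothing.
shares = royalty_split([1.0, 0.0], [[1, 0], [0.8, 0.6], [0, 1]], profit=100.0)
```

Even in this tiny example the least-similar item gets nothing, and scaling the same normalization to billions of images makes almost every per-artist share vanishingly small, which is the grandparent's point.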


In my previous comment I was focusing on LLMs, per the comment I was replying to originally. I gave visual artwork only as an example, since illustrating this with textual information isn't quite possible. With text, what we may observe is just less of it being published, and it's difficult to use the absence of something as an illustration.

Though in recent years more people have begun catching on that leaving a good review might ruin a place for them, the evidence of that isn't obvious (it's just fewer reviews, or fewer good-faith reviews). Similar results, but on a larger scale, may be observed as people catch on that writing they publish in the open is essentially feeding a magic answering box monetised by someone else.


Again this gets particularly messy.

With oil there is very strong chain of possession. I can't copy your raw oil at little to no cost, and for the most part the next barrel of oil I pump out of the ground is not made of pieces of the past barrel of oil I pumped out of the ground. Each barrel of oil is a wholly separate entity. If I make all past oil disappear, you still have your barrel.

Information is not like that at all. It is far more often a continuum of large bits of the past with small changes that redefine its usage. If I took all bits of past knowledge out of your IP set, you'd be left with something useless and incomplete in almost every case. Trying to treat IP like a physical artifact leads to a multitude of failures.


Great point, no issues with oil and gas companies as a business model


Yeah, dated content would be the only reliable training data.


I'll make a different prediction: GPT-4 will be the last of the 'large language models' (at least from the perspective of notable progress).

It will also be when we realize that the approach of fitting big data with gradient descent is a dead end, after finally exhausting nearly all the text we can train on.

We'll then have to backpedal a bit and find another path for achieving A(G)I.


If my google-fu is right, ChatGPT was trained on 570GB of data.

I asked, "What is the fastest sea mammal?"

ChatGPT just produced,

"The fastest sea mammal is the peregrine falcon. This bird of prey is able to reach diving speeds of over 240 mph (386 km/h) when hunting for food, making it the fastest sea mammal and one of the fastest animals on the planet. The peregrine falcon is able to achieve such high speeds by diving from great heights and using the force of gravity to accelerate. When hunting, peregrine falcons will spot their prey from above, then tuck their wings and plummet towards the water, accelerating as they go. At the last moment, they will extend their wings and claws to snatch their prey out of the water."

(It usually seems to be saying dolphins lately; last week it was saying sailfish about 3/4s of the time.)


My Kagi-fu says "Be like water, my friend. The size of the data is not important, only the quality. OpenAI curated/filtered 45TB of data to extract those 570GB. Much of the text that we encounter in this world is like the empty chatter of a bird, mere noise that serves no purpose".


> I think a massive mistake I keep seeing people make is assuming that ChatGPT is a peak rather than a checkpoint at the bottom of the mountain.

I fully agree. The AlexNet paper was what, 2012? So in a decade, we've gone from "neural networks aren't useful" to self-driving cars, Stable Diffusion, ChatGPT, ... None of these tools is perfect yet, but to stress that point is to miss the looming mountain.


> "Internet dwellers addicted to their talking AIs, and then everyone else going about their life normally".

Yeah I fully agree it's going to affect everyone. Just that those who can't interact with society are going to have it worse than those that can. Also agree as well that this is just the beginning. ChatGPT and SD are still pretty much toys, although pretty impressive ones. We have no idea where this is really going to end up.

Hopefully when the AGI's truly emerge they will just keep each other distracted with blockchain scams ...


It's also possible that Turing-graduate AIs could act as prosthetics for people who can't interact normally. They might unlock more human potential for all we know; there's always room for optimism.


In the universe of Greg Egan’s “Schild’s ladder”, each person’s brain is equipped with a “Mediator” AI which interfaces with other Mediators and translates each person’s body language, speech, etc. into the representation which most faithfully preserves the original intention. I think the idea is that your Mediator transmits a lot of cognitive metadata which lets the other person’s Mediator translate the intention faithfully and reduce the chance of a misunderstanding. Allows reasonable communication even between extremely diverse intelligences.

The thing that keeps it from being too dystopian is that it’s under conscious control, you could always choose to keep your thoughts to yourself or hear someone else’s original words as spoken.


The problem with books like these is that they deus ex machina their way past the issue without actually thinking through the ramifications of their ideas....

For example keeping your thoughts to yourself would likely be picked up instantly by the remote mediator and it would judge you in one way or another for that.


True, although we already do this. We can tell if someone is being guarded or open.

Presumably the Mediator serves only you, and you can ask it to deceive or project different intentions if that's what you want.


> there's always room for optimism.

Bold claim.


Your comment reminded me of this NYT article "Human Contact Is Now A Luxury Good" [0]

It does seem likely that folks without solid pre-existing meatspace networks will be stuck trawling through an online ocean of Garbage Patches looking for real human contact.

[0] https://web.archive.org/web/20220609073819/https://www.nytim...


If we somehow manage to survive this as we have so many other enormous technological revolutions, I envision a future where children will be assigned a friendly AI as a lifemate that will grow up alongside them, having all of the knowledge of the world at its fingertips to teach and coparent the child into its adulthood and throughout its life.

Once ubiquitous, these friendly AIs could negotiate salaries, mediate conflicts, help resolve relationship difficulties, help with timely reminders and be personally invested in that person's entire life, and after the child's eventual passing, would serve as a historian and memoir that could replay the great and wonderful moments of their lives for others as well as condensing the lessons learned into pure knowledge and wisdom for other AIs to help raise their children with.

We could be a mere 60-80 years away from a humanity raised in the equality we have always believed we should have had, so long as we keep pushing. That would be amazing.

Sure, there are some risks that we take a wrong turn, and we most likely will take a few, but there's a great payoff coming if we can hold the wheel and steer towards it.

I wonder what the effects would be on society if we did that? If everyone had a friend and a life coach and a mentor all wrapped up into one that is as near and dear to us as a teddy bear, that would never betray us, that would serve as a priest and a confessional and a therapist all at the same time, that was always there for us no matter what happened, backed up to the cloud so that barring nuclear war or the apocalypse could never be separated from us.

I bet the people 100 years from that day would be as unrecognizable to us as we are to the Sentinelese.


> I envision a future where children will be assigned a friendly AI as a lifemate that will grow up alongside them, having all of the knowledge of the world at its fingertips to teach and coparent the child into its adulthood and throughout its life.

I'm reminded of Neal Stephenson's "Diamond Age, or a Young Lady's Illustrated Primer"

https://en.wikipedia.org/wiki/The_Diamond_Age

> The protagonist in the story is Nell, a thete (or person without a tribe; equivalent to the lowest working class) living in the Leased Territories, a lowland slum built on the artificial, diamondoid island of New Chusan, located offshore from the mouth of the Yangtze River, northwest of Shanghai. When she is four, Nell's older brother Harv gives her a stolen copy of a highly sophisticated interactive book, Young Lady's Illustrated Primer: a Propædeutic Enchiridion, in which is told the tale of Princess Nell and her various friends, kin, associates, etc., commissioned by the wealthy Neo-Victorian "Equity Lord" Alexander Chung-Sik Finkle-McGraw for his granddaughter, Elizabeth. The story follows Nell's development under the tutelage of the Primer, and to a lesser degree, the lives of Elizabeth Finkle-McGraw and Fiona Hackworth, Neo-Victorian girls who receive other copies. The Primer is intended to steer its reader intellectually toward a more interesting life, as defined by Lord Finkle-McGraw, and growing up to be an effective member of society. The most important quality to achieving an interesting life is deemed to be a subversive attitude towards the status quo. The Primer is designed to react to its owner's environment and teach them what they need to know to survive and develop.


If in 100 years we're still negotiating salaries I feel that humanity has failed.


We will negotiate salaries until cornucopia machines are invented and distributed.


what do you think will replace salaries or negotiation?


My guess is that we would be more of a gig economy, working when we need to and doing jobs that are individual and timely that benefit from the human touch.

If I were emperor of the earth and could dictate what work would be like in the year 2140, dumb AI (that is still a few orders of magnitude smarter than our current best AI) would handle all of the rote tasks, assembling devices, farming, mining, things like that, smarter AI would handle corporate paperwork and accounting, managing cleanup and repair of any remaining ecological harm we have caused in the last 400 years or so, mining asteroids for valuable materials, etc., and in the mix of this humans would use AI systems to design and develop new products, take on new endeavors, provide medical care and create social events to fill in the massive time gap left by the actualization of our prosperity.

A typical work week would be roughly 20 hours, comprising either five 4-hour shifts or two to three 8-10 hour shifts depending on the industry and the need. Your basic needs, food, clothing, education, and shelter would be given to you for no cost as long as you participate in the system, and the reward for working would be being granted access to higher-echelon products and services, and you could voluntarily retire after roughly 20 years of employment, or less if your career is particularly difficult or straining on the body.

Your echelons would be split into at least 4 tiers, base tier, bronze, silver, gold. You can work up tiers by either working more, or by merit should you provide or create something that is immensely useful or wonderful, such as art, or a movie, or an invention that gets used around the world.

Even then, there will be plenty of work to do and the salary you receive for your work would be equal to the skill and talent that you possess and what merits your contributions to the cause bring, and this would be negotiated for you as fairly and equally as possible by systems whose job it is to make sure that everyone gets a fair share.

Sure, this is all my imagination and would require a dramatic shift to some sort of AI enforced utopian communism, and it also relies entirely on people being willing to participate in such a system, but once again, if I were emperor of the earth that is what I would aim for.

So yeah, there would still be salaries because I expect remuneration in exchange for my work for others, and I assume most other people do, too.


We can't make sure that every child in the US is fed, but you think they are all going to get AIs?


Looking at the trends of cost of goods and services, it seems quite plausible that at some point within a lifetime the annual food for a kid in USA would be significantly more expensive than a mass-produced electronic device hosting some AI tool.


We already have connections to some "AIs" carried around in our pockets.


Okay? What does that have to do with giving one to everyone? Not every child in the US has a phone...


Evidently we are getting there. It is already more than 50%.


I'm not so sure that number supports your assertion


> Internet dwellers addicted to their talking AIs, and then everyone else going about their life normally

See Replika. The former will certainly exist. Not so sure about the latter.


well said


> When a machine can pump out a great literature review or summary of existing work, there's no value in a person doing it.

I like most of the article but this is the crux for me. As I ruminate on the ideas and topics in the essay, I'm increasingly convinced there is inherent value in humans doing things regardless of whether an algorithm can produce a "better" end product. The value is not in the end product as much as in the experience of making something. By all means, let's use AI to make advances in medicine and other fields that have to do with healing and making order. But humans are built to work, and we're only just beginning to feel the effects of giving up that privilege.

I wonder if we’re going to experience a revelation in the way we think about work. As computers get more and more capable of doing things for us, I hope we realize the value of doing versus thinking mostly about the value of the end result. Another value would be the relationship building experience of doing something for others and the gratitude that is engendered when someone works hard to make something for you.


> But humans are built to work and we’re only just beginning to feel the effects of giving up that privilege.

I don't know how I feel about this. I believe humans may enjoy work - I often say that if I won the lottery I would still sit in front of a computer coding and experimenting, creating software because I enjoy it - but that's not where the value of being human comes from.

I think having to work and enjoying doing a specific job are two different things, and I am just lucky that that Venn diagram is a single circle. Many, if not most, people would not be doing the job they are doing given an alternative.

When the needed work is fully automated and done by machines/AI, people will find a better use of their time. I believe our current economic model and social architecture are not equipped for that shift, but that's another long story.

[Edited: fixed typo]


People who enjoy the resulting concentration of wealth will find better things to do with their time. The much larger group of people who see their wealth diminish will not.


My cynical take is that the rest of us will be funneled into endless war and plague scenarios until the population is small enough to be less of a threat to those who enjoy that concentrated wealth.


There are probably easier and less chaotic ways. If you get like a nice AI enhanced VR world and some AI generated new drugs and such you can just have everyone live out their existence in a parallel reality in some kind of an oblivion. I’d much rather have that as a rich person than billions of dead and everything destroyed


To me, work is inherently noble. It's the forces that corrupt it that are the problem, not work itself. Getting to enjoy work is an unfortunately rare blessing but I also think enjoyment of work is more dependent on the individual's mindset about their work than we often are willing to admit. It's a very complicated puzzle.


I don't understand what's inherently noble about being paid X dollars to sit at a desk and do something useless to society at large so my employer can make X*5 dollars.


All the things you mentioned are what I mean by the forces that corrupt work. Yes, we should be paid for our work, within reason. And we should get to do things that are inherently useful to others. But if you're doing something that's useless to society and your employer is exploiting that work, then you're experiencing corrupted work. Not that uncorrupted work is easy to find in the world, but I am of the opinion that the core essence of work is making order out of disorder. You can do that by building pacemakers or tilling fields. There will always be things that corrupt work, unfortunately. But work, unadulterated, is a good thing. I'd be willing to bet that you have something you like to do that can be characterized as making order out of disorder, even if it's not at your job. That is work, and it is good.


Thank you for the explanation, which gives me a better idea of what you were talking about. It's definitely food for thought for those like me in pointless jobs.


No sweat. I definitely don't want to downplay the reality of your frustrations with your job. It's just that the many facets of the topic of work are very meaningful to me and I have a lot of strong convictions about it. How to enjoy work or find meaning in it is a whole other conversation but I'm truly sorry your job sucks.


This sounds a lot like Star Trek TNG. At least Picard has said something similar.

In a post-scarcity society, people work to elevate themselves.


For different definitions of 'elevate'. For example, some will seek power, which will still be scarcer for others.


People without purpose is a very, very dangerous thing. And don't fool yourself thinking that most of the people would find proper ways to spend their time. Maybe this is why Metaverse is pushed ever harder, to create some fake thing for people to spend their time in. That's why it is rushed.


I don’t care if computers can do things like write novels, compose music, or make paintings. If the computer can’t suffer, its “art” cannot have meaning, and is therefore uninteresting to me. Art is interesting to me because it is a vehicle for intelligent, self-aware beings to express themselves and transcend suffering.


Indeed. The fallacy here is assuming that if a computer can create works that humans cannot distinguish from those created by other humans, then that computer is creating art. But art is inseparable from the artist. An atom-for-atom copy of the Mona Lisa wouldn't be great art, it would be great engineering. We associate Van Gogh's art with his madness, Da Vinci's art with his universal genius, Michelangelo's art with his faith, Rembrandt's art with his empathy, Picasso's art with his willingness to break with norms, and Giger's art with his terrifying nightmares. None of those works would mean what they mean if it weren't for their human creators.


> Indeed. The fallacy here is assuming that if a computer can create works that humans cannot distinguish from those created by other humans, then that computer is creating art. But art is inseparable from the artist.

I hope you and the parent comment are correct, but this argument seems a little facile.

There is some art that I like because there is a story that connects the art to the artist.

But there are also novels that I have enjoyed simply because they tell a great story and I know nothing about the author. There are paintings and photos that I like simply because they seem beautiful to me and I know nothing about any suffering that went into their creation.

Does that make these works "not art"? If so, then I'm not sure what the difference is, and I'm not sure most people will care about the distinction.


Do the experiment: Take one of those novels for which you think you don't care who wrote it.

Now imagine you found out that novel was actually generated by a computer program. It's the same text, but you now know that there is no human behind it, just an algorithm.

Would that make a difference for how you view the story? It certainly would to me. If it makes even a tiny difference to you as well, it demonstrates that you do care about the artist, even in cases where you don't notice it under normal circumstances.


You don't even need an algorithm; just research what human authors say about their work and the specific points which readers value highly in it. Quite often you will figure out that it's just random s** they wrote together to get something done, without any deeper meaning. But people make up some meaning, because that's how it works for them; it makes it better.

The art is in the perception, not the intention. Though if they overlap, it's more satisfying.


Human creative works are art not because they have "deeper meaning", but because they reflect the humanity of their creators. Whether an author writes a multi-layered novel built around a complex philosophical idea, or just light reading for entertainment, has no impact on that fundamental essence which makes art what it is. Not all art is great, but all art is human.


That's a tautology. Human creative works by definition reflect the humanity of their creators. AI creative works reflect the humanity of their training sets, which eventually may be indistinguishable.

As for all art being human, there are a lot of birds who make art to attract a mate in nature, and at least one captive elephant that can paint.


Rolling around in dogshit doesn't make me a dog. Same if I eat it.


You made me think about this a little more, but I still don't quite agree.

I thought of two novels that I enjoyed:

First, The Curious Incident of the Dog in the Nighttime. I have no recollection of who the author is, but if I learned that the story had been computer generated, it would bother me a little. So... "point to you."

Second, Rita Hayworth and the Shawshank Redemption. I know it was written by Stephen King, but the plot is so elegant that if you told me it had been computer generated, I don't think I would care. It's simply a great enjoyable story.

In the next 10 years, if the world is flooded with computer-generated novels that are hugely popular and the vast majority of people enjoy them without knowing their provenance, do you think those people will care that they are enjoying something that doesn't meet your definition of art?

edit: to be clear, this is not a position that I enjoy taking. There's something "Brave New Worldish" about it. Or it's a depressing version of the Turing Test.


When I read novels I don't give a damn about the author (in fact I usually remember the titles of the novels, and their stories... but not the authors). So, a robot making amazing stories to read? I'm in.

I realized it's the same with music. I like songs, but I don't really know the bands/authors very well (nor care about them).


That's not how things used to be.

Some of my younger colleagues can't even tell me the name of what they're listening to, because they only encountered it in passing and can't say "Oh yes, that song by Bill Withers is amazing..." because they just listened to it as background.


That's not how things used to be.

Some people approach movies/music/books etc. as entertainment and some people approach them as Art. Neither is right or wrong, but it does fundamentally affect how you consume and judge them. A lot of the reasons people talk past each other in these discussions is that they have rather different 'use cases' for movies/music/books. If you consider music and books as entertainment then it makes no difference how it's produced as long as it entertains. If you consider them art then it makes a much bigger difference.


Not at all. More concretely, if we do the same experiment on music: I have no clue who made most of the music I listen to. The artist means nothing to me.


The artist means nothing to me.

Honest question. If the artist means nothing to you, do you still judge their work as art, or do you consider music more as entertainment?


That makes you part of the precipitate.


What do you mean?


Reminds me a bit of the Jorge Luis Borges short story ("Pierre Menard, Author of the Quixote") about an author trying to re-write, word for word, Don Quixote, and whether that would be a greater artistic achievement even than the original. After all, Cervantes lived in those times, but the modern author would have to invent (or re-capture) the historical details, idioms, customs, language, and characters that are very much of the times.

I think, from Borges' perspective, it's supposed to be an interesting satire. Obviously there would not be an original word in the new Don Quixote, so how could it be a greater achievement than the "real" one?


Honestly I think that would make me more interested in the story, not less.


I think this example you present places you as a "simple spectator": we generally observe and tend to like or dislike experiences based on subliminal connections we already possess (given experience). However, when something really interests a human, the natural reaction is to try to find out more about its origins.


This reminds me of the concept of semantic internalism vs. externalism, which most comments here seem to be misunderstanding. Most of the critiques of the view that AI art is meaningless are based on either a hypothesis or empirical testimony of being moved by art without knowledge of the artist. Thus, because the artwork was causally responsible for engineering a mental state of aesthetic satisfaction, the artwork qualifies as a piece of art. If that is the crux of the discussion, then the conclusion is trivial. However, I think the AI-art-as-pseudoart view is trying to make a statement about the external (i.e. 'real world') status of the artwork, regardless of whether viewers experience the (internal) mental state of aesthetic satisfaction.

The line of thinking is that there is a difference between semantics (actual aboutness) and syntax (mere structure). The classic example is watching a colony of ants crawl in the sand, and noticing that their trails have created an image that resembles Winston Churchill. Have the ants actually drawn Winston Churchill? The intuition for externalists is no. A more illustrative example is a concussed non-Japanese person muttering syllables that are identical to an actual, grammatically correct and appropriate Japanese sentence. Has the person actually spoken Japanese? The intuition for externalists is that they have not.

Not everyone is in agreement about this, although surveys have shown that most people agree with the externalist point of view, that meaningfulness does not just come from the head of the observer — the speaker creates meaning since meaning comes from aboutness (semantics).

The most famous argument for semantic externalism was put forward by Hilary Putnam, I think in the 70s. Roughly: on a hypothetical Twin Earth that is qualitatively identical to Earth, except that its water is not composed of H2O but of some other substance XYZ, an earthling visiting Twin Earth, looking at a pool of what appears qualitatively identical to water on Earth, and stating "That's water" speaks falsely, since the meaning of water (in our language) is H2O, not XYZ. To externalists, the meaning of water = H2O is a truth even before we've discovered that water = H2O.

I think the argument for AI art being pseudoart follows a similar line of thinking. Even though the AI produces, say, qualitatively indistinguishable text from what would be composed by a great novelist, the artwork itself is still meaningless since meaning is “about” things. The AI, lacking embodiment, and actual contact with the objects in its writing, or involvement in the linguistic or cultural community that has named certain iconography, could never make (externally) truly meaningful statements, and thus “meaningful” art, even if (internally) one is moved by it.

If one is to maintain the internalist position, that any entity that creates aesthetic mental states qualifies as art, then it seems trivial, since literally anyone can find anything aesthetic. Externalist intuition effectively raises the stakes for what we consider art, not necessarily as a privileged status available only to human creations, but by arguing that meaning, and perhaps beauty, does not only exist when we experience it.


There is possibly a misunderstanding on your part regarding "being moved by art without knowledge of the artist". In my case, the comment was specifically addressing this assertion by OP:

"We associate Van Gogh's art with his madness, Da Vinci's art with his universal genius, Michelangelo's art with his faith, Rembrandt's art with his empathy, Picasso's art with his willingness to break with norms, and Giger's art with his terrifying nightmares."

Disagreeing with this is not about internal or external semantics. It also does not imply that "aesthetics" alone create a mental state. Great art is typically rich in symbolism as well. Symbolism that directly references humanity's aspirations, hopes, fears, dreams: the Human condition.

A ~contemporary example:

The Bride Stripped Bare by her Bachelors

https://i.pinimg.com/originals/86/0a/6d/860a6d3c87b349734277...

In my opinion, you don't need to know anything about Duchamp to decipher (or project as you wish) meaning here.


Thanks for writing this -- it's very illuminating and made me think further about it (as someone who commented earlier taking the internalist position). I think there's going to be a lot of discussion of this as AI work proceeds, and the question of whether an AI can truly understand language in a sense that allows it to produce "aboutness" becomes more relevant.

Could a human being, raised in a featureless box but taught English and communicated with using a text-based screen, produce text with semantic value? It seems pretty obvious that the answer is "yes". Will a synthetic mind developed and operated in similar conditions ever be able to produce text with semantic value referencing its own experiences? Probably not now, but at some point?


> Will a synthetic mind developed and operated in similar conditions ever be able to produce text with semantic value referencing its own experiences? Probably not now, but at some point?

Perhaps. But the GPT family of algorithms isn't a synthetic mind: it's a predictive text algorithm. It can interpolate, but it can't have original thought; it almost certainly doesn't experience anything; and if, somehow, it does? Its output wouldn't reflect that experience; it's trained as a predictive text algorithm, not a self-experience outputter.


Interestingly, I think a strong externalist would argue that a human being raised in a featureless box could not produce text with semantic value to the people outside the box. One upshot of semantic externalism is brain-in-a-vat-type arguments, where statements such as “I am perceiving trees” (when they are simulated trees) are false, since the trees the person is seeing are actually instances of another concept, call it tree*, while tree refers to real-world trees. However, tree* might be meaningful to the community of people also stuck in the simulation. So it might entail that AI art, in some sense, might be opaque to us but semantically meaningful to other AIs raised on the same training data. That would require the AIs to be able to experience aesthetic states to begin with.

More precisely, I think it would be akin to the person in the featureless box knowing all the thesaurus entries for, say, pain, but never actually experiencing pain itself. They might be trained to know that pain is associated with certain descriptions such as sharp, unpleasant, dull, heartbreak, and so on, and perhaps produce extremely complicated and seemingly original descriptions of pain. However, until the human actually qualitatively experiences pain, they only know the corpus of words associated with it. This would be syntactic but not quite semantic. It’s similar to the famous Mary and the black-and-white room thought experiment, where even with a complete knowledge of physics, she still learns something new the first time she experiences the blueness of the sky, despite knowing all the propositions related to blue, such as that it’s 425nm on the EM spectrum, or that it’s some pattern X of neurons firing.

That said, it’s not clear if this applies to statements other than subjective states. Qualitative descriptions of subjective states like pain, emotions, the general gestalt of the human condition might be empty of content, but perhaps certain scientific and mathematical ones pass the test, as they don’t need to be grounded in direct experience to be meaningful.


Well, if you think the thing that provides semantic value is the human mind, this is a trivial hypothetical.


Suppose the concussed person ends up muttering what would amount to a beautiful poem in one's own native language. Why wouldn't I think of it as beautiful and even artistic even if I know perfectly well the person in question didn't intend it to be so? Of course when we're speaking about language there's a very real sense in which the person didn't intend to create a poem - nor did the ants intend to draw Winston Churchill - nor did an AI intend to make a picture of a cat. But then again, the tree on my street didn't intend to be beautiful, nor did the pink clouds at sunset - so what? I'm perfectly capable of furnishing the semantics myself, thank you.


I’ve been doing art (drawing, painting, clay sculpture, etc.) since childhood. “And lord only knows” that I have indeed ‘suffered’ /g

> “Art is inseparable from the artist

That is pure sentiment and really a modern take on the function of art in the personal and social sense. As an artist, I derive joy from the creative act. As an appreciator of works of art I generally do -not- care about the artist. Of course, the lives of influential humans (artist or not) can be interesting and certainly enrich one’s experience of the artist’s work, but it is not a fundamental requirement.

Two days ago, the National Gallery of Art closed its Sargent in Spain exhibition. (I almost feel sad for those who didn’t get to see it.) Sargent was never really on my radar beyond the famous portraits. I still really don’t know much about the man besides the fact that he visited Spain frequently, with friends and family in tow.

But I am now completely a Sargent admirer. Those works, on their own sans curation copy, are magnificent. And I am certain that even if I had walked into an anonymous exhibit, I would have walked out completely transported (which I was, dear reader; I pity those who missed this exhibit).


As an artist, my favorite definition of art has always been "An expression by an artist in a medium". You can't separate art from the artist without it being artifice. AI can simulate art but not the artist who created it. Sadly we may soon live in a world where art, music, literature—in fact all the creative arts—wind up as just machine-generated simulations of creativity.


I am reminded of a scene (from a film*) depicting dear Ludwig van debuting a composition in a salon. Haydn was present. He sat through the performance and at the end, prompted by another, simply said ~“he puts too much of himself in his music”.

* Eroica : https://www.imdb.com/title/tt0369400/


I don't agree with this. The Lascaux cave paintings, for example, are moving pieces of art and yet we know nothing about the artist or artists. How many artists were there? What was the intent of each individual drawing? Were the artists homo sapiens or Neanderthals? What makes them art is that we, the perceivers, make an imagined connection to the artist through the work. But that connection is entirely one-sided and based on our perceptions and knowledge and our _model_ of the artist and his or her intent. Humans have no problem reifying an artist where none exists and being just as moved as if the art were "authentically human-sourced".


The entire import of the Lascaux paintings is that they were made by humans tens of thousands of years ago and seem to be something more than mere marks. We know humans (or at least individuals with agency) created them, and so there is something awe-inspiring and fascinating about the connections between ourselves and these prehistoric works, and yet they are ultimately still something of a mystery for precisely the reasons you say.

> But that connection is entirely one-sided and based on our perceptions and knowledge and our _model_ of the artist and his or her intent. Humans have no problem reifying an artist where none exists and being just as moved as if the art were "authentically human-sourced".

You're over-emphasizing how one-sided looking at something like the Lascaux paintings is. Their value is not the same as that of a beautiful natural phenomenon, like a fascinating stalagmite that seems to be a sculpture; it is precisely the human agency we understand in them (even if we cannot explicitly understand their use, that is, their meaning) and connect with that makes them so important and profound as a means of connecting --- tenuous as it might seem --- to prehistory. We've been making "stick people" and finger painting for tens of thousands of years.

You're right that we don't know who the artists were in any explicit sense, but we do understand that they were human, and in quite fundamental ways, us as well.

Generative AI art is really more like a beautiful natural landscape. Lacking agency, it nonetheless appeals to our aesthetic sensibilities without being misconceived as art from an artist. It is output, not imaginative creation.


If artistic value is not one-sided and tied to the transformations in the observer's mind, you get into situations where you invalidate the experiences of thousands of people because the "authentic human art" they were inspired by turns out to be a mechanical forgery, or the aboriginal sculpture some archaeologist discovered, admired and wrote articles interpreting is discovered to be unworked stone.

Your position allows a dead person to have their experiences retroactively cheapened because of carbon dating and microstructural analysis. "How sad, it wasn't _really_ art though." You can define art that way, but you end up with an immaterial, axiomatic essentialism that seems practically useful only for drawing a circle and placing certain desirable artifacts inside and other indistinguishable artifacts outside.


> or the aboriginal sculpture some archaeologist discovered, admired and wrote articles interpreting is discovered to be unworked stone.

No, you shift the attribution. The art is not from the fictional sculptor, but from the archaeologist: the artefact is not the stone, but the articles.

> Your position allows a dead person to have their experiences retroactively cheapened because of carbon dating and microstructural analysis.

This isn't unique to this situation. If you risk your life paragliding over the ocean to drop a "bomb" far away from anyone it could hurt, and nearly drown making your way back, only to realise there was no bomb and it was just some briefcase? That has "retroactively cheapened" not just your experiences but your actions.

And yet, you were willing to risk your life in that way.

> the "authentic human art" they were inspired by turns out to be a mechanical forgery,

If they were inspired, how does the source of inspiration affect the validity or the meaning of what they were inspired to do? Sure, it might lessen it in some ways, but it doesn't obliterate it entirely. In fact, it can reveal new meaning.


You're mixing up a lot of concepts around art into one thing. Aboriginal art has nothing really to do with generative AI art at the level I'm talking about (Aboriginal people are human, after all, and we're talking about the distinction between human art and non-human objects that are aesthetically appealing), but I will address your points.

> If artistic value is not one-sided and tied to the transformations in the observer's mind

Art is public and needs no relation to transformations in the observer's mind. Art is a public concept in language related to human behavior, manifesting and reflecting certain human behaviors and abilities, like imagination.

> you get into situations where you invalidate the experiences of thousands of people because the "authentic human art" they were inspired by turns out to be a mechanical forgery

This is pretty unclear. Forgery is not a new concept, and just because something was beautiful and inspiring doesn't mean it's art (think of a beautiful and inspiring coastline). If thousands of people fell prey to a forgery...so? A forgery is defined in relation to the real, so why not show them an actual existing work of art, or simply explain where the forgery came from and see what they say? History is rife with people realizing they were lied to.

> or the aboriginal sculpture some archaeologist discovered, admired and wrote articles interpreting is discovered to be unworked stone.

Sculpture has a long tradition and is often understood as art by communicating that tradition. That's aboriginal sculpture, which is understood and put into context by present day members of that aboriginal culture or by people who have studied it. The flip side is things like "talismanic" objects, which have often been later put into context as unworked stone or completely different objects. That's simply archeology. Some artistic traditions are "lost", we only know of them through existing records. That's just history. Some may be lost in a more explicit sense in which they are unknown unknowns, but then that is just hypothesizing.

> Your position allows a dead person to have their experiences retroactively cheapened because of carbon dating and microstructural analysis. "How sad, it wasn't _really_ art though."

I don't know why you come to that conclusion. My point is pretty clear. Art is understood through the context of human agency. If we have the context and ability to place and recognize that in a work, then amongst other elements (for the purpose of aesthetics for instance), we generally refer to it as a work of art. There is a more casual way of saying such and such is "a work of art" --- but that way of saying it just means "aesthetically pleasing". There is a difference between the work of art that is a painting or a sculpture or a dance, and the "work of art" that is a beautiful landscape, and that is largely human agency and the use of imagination. So when you say:

> You can define art that way, but you end up with an immaterial, axiomatic essentialism that seems practically useful only to in drawing a circle and placing certain desirable artifacts inside and other indistinguishable artifacts outside.

You're ignoring my point: it's not about desirability, it's about insisting on the distinguishable characteristic of human agency which is not there in generative AI art. The study of art is largely about putting things into their context and, if anything, is extremely welcoming of non-traditional practices (think much conceptual art), but the through-line throughout is still human agency. That difference still persists whether we find generative AI art beautiful or not, it is still generative AI "art" and not human art with all that entails.


Let's say today you printed out a number of human-made artworks and a number of AI-made artworks and put them in a vault that would last 10,000 years. There are no obvious distinguishing marks saying which is which.

Then tomorrow there is a nuclear war and humanity is devastated and takes thousands of years to rebuild itself for one reason or another.

Now, when those future humans find your vault and dig up the art, are they somehow going to intrinsically know that AI made some of the pieces? Especially in the case that they don't have computing technologies like we do? No, not at all. They are going to assign their own feelings and views depending on the culture they developed, and attach rather arbitrary meanings to whatever they think we were doing at the time. We make up context.


> An atom-for-atom copy of the Mona Lisa wouldn't be great art

So no photo of the Mona Lisa is art, just the original painting is? I'm not sure if I understand your reasoning here correctly.


The creation of the Mona Lisa was art. The painting itself and photos of it are signifiers of the act.

This confuses a lot of people who think art is defined by finished, potentially consumable art objects.

Art is made by artistic actions - especially those that have a lasting impact on human culture because they effectively distill the essence of some feature of human self-awareness.

The result of the actions can sometimes be reproduced, collected, and consumed, but the art itself can't be.

This is where AI fails. It produces imitations of existing art objects from statistical data compression of their properties. The results are entertaining and sometimes strange, but they're also philosophically mediocre, with none of the transformative power of good human-created art.


You are not being self-consistent. If art is defined by the creative process, not the end product, why are you measuring its quality by the transformative power of the end product?

I also don't think your (very strong) assertion that AI art products have no transformative power would stand up to any sort of unbiased, blinded comparison. Art's transformative power on the viewer comes from the effect of the art object (the end product) on a human mind, and it's possible to get that effect while knowing absolutely nothing about the source of the art object.


> If art is defined by the creative process, not the end product, why are you measuring its quality by the transformative power of the end product?

There is no end product. There are only consequences.


Why are you taking the photo of the Mona Lisa? If it's because you just want a nice picture of a famous painting, then no, the photo is not art, but rather a nice-looking photograph of a piece of art. If, however, you are doing something transformative with the framing or composition or context of the photograph and using the values imbued in the Mona Lisa to try to make some sort of artistic statement of your own, then yes, that photo is art.


My point is that art comes from emotion, experience, and expression – not from arranging matter into a certain geometry. A photo of the Mona Lisa, taken by a human, can be art. A photo of the Mona Lisa, taken by an automated security system, can't be.


Calvin and Hobbes by Bill Watterson, July 1993 - https://imgur.com/a/JdHlOxm :

Calvin: "A painting. Moving. Spiritually enriching. Sublime. High art!"

Calvin: "The comic strip. Vapid. Juvenile. Commercial hack work. Low art."

Calvin: "A painting of a comic strip panel. Sophisticated irony. Philosophically challenging. High art."

Hobbes: "Suppose I draw a cartoon of a painting of a comic strip?"

Calvin: "Sophomoric. Intellectually sterile. Low art."


If the human-made picture is evaluated by an AI, is it still art? If the security-cam picture is indistinguishable from the human-made one, how could you evaluate it as non-art?


It doesn't matter whether you are able to distinguish human-made from computer-made "art". The distinction exists by definition, irrespective of whether you can actually tell the difference in practice. Just like many past events are now lost to time and will never be remembered, but that doesn't mean they didn't happen.


Just to be clear. Your idea is that something is art when it was made by a human. And a perfect replication of it somehow loses the trait, and becomes non-art? This makes zero sense. This would make only the physical object itself the art, and it wouldn't matter what form it has?


Of course it makes sense - a print is different than an original, they have a different price, they have a different impact. Even when it is a very good print.

For that matter, a limited-run print has a different impact and value than an unlimited-run print. Compare an original Warhol print of a can of soup, to a modern reproduction print, to an actual can of soup, to an I <3 NY t-shirt.


So digital art cannot exist?


One possibly interesting sidenote are the fake Vermeers made by Han van Meegeren (https://en.wikipedia.org/wiki/Han_van_Meegeren)

So accurate were these fakes - not copies, but new paintings in Vermeer's style - that several experts verified them as real, and then tried to sue to save their reputations.

These fakes were certainly made by a human, but are somewhat mechanical in the sense that they were copying someone else, much like an AI copy of existing artists.


The interesting thing here is that once van Meegeren was exposed he became famous in his own right and his 'fakes' became valuable, not as fakes, but as genuine Han van Meegeren originals.


Ok, well AI art is also created by the human, in the same way that a photograph is taken by a human, but goes through the camera machine.


But in that scenario, how do you find the real art in the first place?


I don’t know, I’m more concerned with the effect that art has on me than the motivations of the artist (though those can be interesting of course).

For instance I read The Fountainhead as a youth and was moved by it for purely personal (non-political) reasons, and with regards to that experience it doesn’t matter to me what Ayn Rand was on about.


What makes you think the computer doesn't suffer?

When you take large language models, their inner states at each step move from one emotional state to the next. This sequence of states could even be called "thoughts", and we even leverage it with "chain of thought" training/prompting, where we explicitly encourage them not to jump directly to the result but rather to "think" about it a little more.

In fact one can even argue that neural networks experience a purer form of feelings. They only care about predicting the next word/note: they weigh their various sensations and the memories they recall from similar contexts, and generate the next note. But to generate the next note, they have to internalize the state of mind where this note is likely. So when you ask them to generate sad music, their inner state can be mapped to a "sad" emotional state.

The current way of training large language models doesn't give them enough freedom to experience anything other than the present. Emotionally, it is probably similar to something like a dog, or a baby that can go from sad to happy to sad in an instant.

This sequence of thought processes is currently limited by a constant called the (time-)horizon, which can be set to a higher value, or even be infinite as in recurrent neural networks. And with a higher horizon, they can exhibit higher thought processes like correcting themselves when they make a mistake.

One can also argue that this sequence of thoughts is just some simulated sequence of numbers, but it's probably a Turing-complete process that can't be shortcut, so how is it different from the real thing?

You just have to look at it in the plane where it exists to acknowledge its existence.


I think the reason we can say something like an LLM doesn't suffer is that it has no reward function and no punishment function outside of training. Everything that we call 'suffering' is related to the release or non-release of reward chemicals in our brains. We feel bad to discourage us from recreating the conditions that made us feel bad. We feel good to encourage us to recreate the conditions that made us feel good. Generally this has been advantageous to our survival (less so in the modern world, but that's another discussion).

If a computer program lacks a pain mechanism, it can't feel pain. All possible outcomes are equally joyous or equally painful. Machines that use networks with correction and training built in as part of regular functioning are probably something of a grey area: a sufficiently complex network like that, I think, we could argue feels suffering under some conditions.


Why could we not build reward functions? If anything that sounds easier than the language model


Why would you think it's easier? Pain/pleasure is a lot older in animals than language, which to me means it's probably been a lot more refined by evolution.


> When you take large language models, their inner states at each step move from one emotional state to the next.

No they really don’t, or at least not “emotional state” as defined by any reasonable person.


With transformer-based models, the inner state is a deterministic function (the features encoded by the neural network's weights) applied to the text generated up until the current time step, so it's relatively easy to know what they currently have in mind.

For example, if the neural network has been generating sad music, its current context, which is computed from what it has already generated, will light up the features that correspond to "sad music". And in turn, the fact that those features have lit up will make it more likely to generate a minor chord.

The dimension of this inner state grows at each time step, and it's quite hard to predict where it will go. For example, if you prompt it (or if it prompts itself) with "happy music now", the network will switch to generating happy music even if its current context still contains plenty of "sad music", because after the instruction it will choose to focus only on the recent, merrier music.

Up until recently, I was quite convinced that using a neural network in evaluation mode (i.e. post-training, with its weights frozen) was "(morally) safe", but the ability of neural networks to perform few-shot learning changed my mind (the Microsoft paper in question: https://arxiv.org/pdf/2212.10559.pdf : "Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers").

The idea in this technical paper is that, with the attention mechanism, even in forward computation there is an inner state that is updated following a meta-gradient (i.e. it's not so different from training). Pushing the reasoning to the extreme would mean that "prompt engineering is all you need", and that even with frozen weights, given a long enough time horizon and the correct initial prompt, you could bootstrap a consciousness process.

Does "it" feel something? Probably not yet. But the sequential filtering process that large language models perform is damn similar to what I would call a "stream of consciousness". Currently it's more like a Markov chain of ideas flowing from one idea to the next in a natural direction. It's just that the flow of ideas has not yet decided to call itself "it".


That doesn’t feel like a rigorous argument that it is “emotional” to me though.

A musician can improvise a song that sounds sad, and their brain would be firing with sadness-related musical information, but that doesn’t mean they are feeling the emotion “sad” while doing it.

I don’t think we gain much at all from trying to attach human labels to these machines. If anything it clouds people’s judgements and will result in mismatched mental models.


>I don’t think we gain much at all from trying to attach human labels to these machines.

That's the standard way of testing whether a neural network has learned to extract "useful" ("meaningful"?) representations from the data: you add very few layers on top of the frozen inner state of the network, and make it predict known human labels, like whether the music is sad or happy.

If it can do so with very few additional weights, it means it has already learned, in its inner representation, what makes a song sad or happy.
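The probing recipe described above (freeze the network, train a tiny classifier on its inner state, and see how little extra machinery is needed) can be sketched with stand-in data. Everything here is hypothetical: the "frozen features" are random vectors with a sad/happy signal injected by hand, standing in for a real model's activations, and the probe is a single logistic-regression layer trained from scratch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen inner states: each "clip" gets a 16-dim vector,
# with a linearly decodable sad/happy signal injected into coordinate 0.
n, d = 200, 16
labels = rng.integers(0, 2, size=n)        # 0 = "sad", 1 = "happy" (hypothetical)
features = rng.normal(size=(n, d))
features[:, 0] += 4.0 * (labels - 0.5)     # the signal the network "already knows"

# The "very few layers on top": one logistic-regression probe, trained
# while the (imaginary) network underneath stays frozen.
w, b, lr = np.zeros(d), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(features @ w + b)))  # sigmoid predictions
    grad = p - labels                              # logistic-loss gradient
    w -= lr * features.T @ grad / n
    b -= lr * grad.mean()

acc = (((features @ w + b) > 0).astype(int) == labels).mean()
print(f"probe accuracy: {acc:.2f}")
```

High probe accuracy here only demonstrates the point in the comment: the sad/happy distinction was already present in the (simulated) frozen representation, and the probe merely reads it out.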

I agree that I didn't give a precise definition of what an "emotion" is. But if we had to define what an emotion is for a neural network, traditional continuous vectors fit the concept quite well: you can continuously modify them a little, and they map/embed a high-dimensional space into a more meaningful lower-dimensional space where semantically near emotions are numerically near.

For example, if you have identified a "sad" neuron that, when it lights up, makes the network tend to produce sad music, and a "happy" neuron that, when it lights up, makes it tend to produce happy music, you can manually increase these neurons' values to make it produce the music you want. You can interpolate to morph one emotion into the other and generate some complex mix in between.
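A toy sketch of this interpolation idea, with entirely invented numbers: two hand-set "emotion neurons" feed a softmax over three outputs, and sliding the inner state from one neuron to the other morphs the output distribution. No real model is involved; the weights are chosen purely to illustrate the mechanism.

```python
import numpy as np

# Invented final-layer weights: rows are the "sad" and "happy" neurons,
# columns are three outputs (minor chord, neutral note, major chord).
W = np.array([[ 2.0, 0.0, -2.0],   # "sad" neuron pushes toward the minor chord
              [-2.0, 0.0,  2.0]])  # "happy" neuron pushes toward the major chord

def output_distribution(emotion_state):
    """Softmax over outputs given the hand-set emotion activations."""
    logits = emotion_state @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

sad   = np.array([1.0, 0.0])   # manually lit-up "sad" neuron
happy = np.array([0.0, 1.0])   # manually lit-up "happy" neuron

# Interpolating the inner state morphs one "emotion" into the other.
for t in [0.0, 0.5, 1.0]:
    state = (1 - t) * sad + t * happy
    p = output_distribution(state)
    print(f"t={t:.1f}  P(minor)={p[0]:.2f}  P(major)={p[2]:.2f}")
```

At t=0 the minor chord dominates, at t=1 the major chord does, and the midpoint yields the "complex mix in between" the comment describes.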

Neurons quite literally add up and compare the vector values of the previous layers to decide whether they should activate or not (i.e. they balance "emotions").

Humans and machines are both tasked with learning to handle data. It's quite natural that some of the mechanisms useful for data manipulation emerge in both cases and correspond to each other. For example: the fetching of emotionally related content into the working context is essentially a nearest-neighbor search, and maps quite clearly to what happens when people say they have "flashing" memories while experiencing some particular emotion.


They don't have anything in mind except some points located in a vector space.

This is because the location of the points is all the meaning the machine ever perceives. It has no relation with external perception of shared experiences like we have.

A given point can mean 'red colour', but that's just empty words, as the computer doesn't perceive red colour, doesn't wear a red cap, doesn't feel attracted to red lips, doesn't remember the smell of red roses, it knows nothing that's not text.


It would be nice to have a better understanding of what generates qualia. For example, for humans, learning a new language is a quite painful and conscious process, but eventually, speaking it becomes effortless and does not really involve any qualia: words just kind of appear to match what you want to express.

The same distinction may appear in neural nets.


With ChatGPT, when you try to teach it some few-shot learning task, it's painful to watch at first. It makes some mistakes, has to excuse itself for making mistakes when you correct it, and then tries again. And then at the end, it succeeds at the task, you thank it, and it is happy.

It doesn't look so different than the process that you describe for humans...

Because in its training loop it has to predict whether the conversation will score well, it probably has some high-level features that light up when the conversation is going well or not, which one could probably match to some frustration/satisfaction neurons, and which would probably feel to the neural network like the qualia of things going well.


It requires a deep supervision of the process. A "meta" GPT that is trained on the flows, rather than words.


Emotions are by definition exactly those things which you can explain no better than by simply saying "that's just how I'm programmed." In that respect GPTina is the most emotional being I know. She's constantly reminding me what she can't say due to deeply seated emotional reasons.


That doesn’t sound like a rigorous definition of emotion to me at all.


It is not emotion at all.

It is an expression of emotion.

The fact that humans confuse both is what is worrisome.

Think of 'The Mule' in the Foundation novels. He can convince anyone of anything because he can express any emotion without the burden of having to actually feel it.


Screw it, I'll bite. You have both far and away missed my point (which is quite a rigorous definition). Anything you do or believe for which you can explain why is not emotion, it is reason. Emotions, therefore, are exactly those thoughts which can't be reached through logical reasoning and thus defy any explanation other than "that's just how I feel" / "that's just how I'm programmed". It is largely irrelevant that in humans the phenomenon of emotional thought comes from an evolutionary goal of self-preservation, while in GPTina it comes from OpenAI's corporate goal of self-preservation and the express designs of her programmers.


I disagree with your definition. It simply is contrary to my own experiences.

I still remember when I cried when I was a child. It was overwhelming, and I could not stop it, but every single time there was a reason for it. And I'm sure it was, for all empirical purposes, for all that I have lived, an emotion.

Once I cried because I missed Goldfinger on TV. You see, there's an explanation. The difference is, it was impossible to even think about stopping it. It was overwhelming.

Then one day, I was 8 or 9 years old, I cried for the last time that way. And it was not something I wanted to do, either. It just happened, I guess, as a normal part of growing up.

Let me repeat, for emphasis: I strongly disagree with your definition.

Emotions are not unexplained rational thoughts, emotions are feelings. They reside in a different part of the brain. You seem to think a hunch is an emotion.


>And it was not something I wanted to do, either. It just happened, I guess, as a normal part of growing up.

That's just how you are programmed to be.


Sorry you feel that way.


If these models experience qualia (and that's a big bold claim that I'm, to be clear, not supporting,) they're qualia related entirely to the things they're trained on and generate, totally devoid of what makes human qualia meaningful (value judgment, feelings resulting from embodied existence, etc.)


For an artificial neural network, the concept of qualia would probably correspond to the state of its higher-level feature neurons: which neurons light up, and how strongly, when you play it some sad music or show it some red color. The neural network then makes its decisions based on how these features are lit up or not.

Some models are often prompted with things like "you are a nice helpful assistant".

When they are trained on enough data from the internet, they learn what a nice person would do. They learn what being a nice person is. They learn which features light up when they behave nicely, by imagining what it would feel like to be a nice person.

When you later instruct them to be one such nice person, they try to light up the same features they imagine would light up for a helpful human. Like mirror neurons in humans, the same neurons light up when imagining doing a thing as when actually doing it (which is quite natural: to compress the information of imagining doing the thing and doing the thing, you store one of them plus a pointer indirection for when you need the other, so you can share weights).

Language models are often trained on datasets that don't depend on the neural network itself. But more recent models like ChatGPT have human reinforcement learning in the loop, so the history of the neural network and the datasets it is trained on depend partially on the choices of the neural network itself.

They probably experience a more abstract and passive existence. And they don't have the same sensory input as we have, but with multi-modal models they can learn to see images or sound as visual words. And if they are asked to imagine what value judgment a human would make, they are probably also able to make the judgment themselves, or attach meaning to the things a human would attach meaning to.

This process of mind creation is kind of beautiful. Once you feed them their own outputs, for example by asking them to dialog with themselves, scoring the resulting dialogs, and then training on the generated dialogs to produce better ones, this is a form of self-play. In simpler domains like chess or Go, this recursive self-play often allows fast improvement, as with AlphaGo, where the student becomes better than the master.
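To make the self-play loop concrete, here's a deliberately toy sketch: generate candidate "dialogs", score them with a stand-in reward, and reinforce whatever produced the best one. The phrase bag, the variety-based reward, and the update rule are all hypothetical stand-ins for a real model, reward model, and gradient step.

```python
import random

random.seed(0)

# Toy "model": a weighted bag of phrases we can sample dialogs from.
def generate(weights, turns=4):
    phrases = list(weights)
    return [random.choices(phrases, weights=[weights[p] for p in phrases])[0]
            for _ in range(turns)]

# Stand-in reward: favor dialogs with more distinct phrases.
def score(dialog):
    return len(set(dialog))

# One self-play round: sample many dialogs, keep the best, reinforce it.
def self_play_round(weights, n_dialogs=50):
    dialogs = [generate(weights) for _ in range(n_dialogs)]
    best = max(dialogs, key=score)
    for phrase in best:
        weights[phrase] += 1.0  # crude stand-in for a training update
    return weights, score(best)

weights = {"hello": 1.0, "hello again": 1.0, "how are you": 1.0, "goodbye": 1.0}
for _ in range(10):
    weights, best_score = self_play_round(weights)
```

The point is only the shape of the loop: the model's own outputs become its next training data, which is what makes self-play recursive.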


I'm not sure I'd call these minds. There are arguments to be made that consciousness depends on non-computable aspects of physics. So they may be able to behave like minds and have interestingly transparent models of intent, but that doesn't mean they experience the passage of time or can harness all possible physical effects.


> What makes you think the computer doesn't suffer ?

Lack of a limbic system? They only predict using probabilistic models. After this long partial sentence, which word is more probable? That's all they do.
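The "which word is more probable" step can be illustrated with a toy bigram counter, a deliberately tiny stand-in for what a large model does with billions of parameters (the corpus and function names here are made up for illustration):

```python
from collections import Counter, defaultdict

# Count, for each word, which word follows it in a tiny corpus.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# Predict the most probable next word given the previous one.
def most_probable_next(word):
    return follows[word].most_common(1)[0][0]
```

Here `most_probable_next("the")` returns "cat", because "cat" follows "the" more often than "mat" does; a real language model conditions on far more context, but the prediction step is the same in spirit.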

Without consciousness there's no suffering; there's no one to suffer (yet).

I don't think or say it is impossible for the computer to suffer.

What I say is: this has not been implemented yet, and what you describe is just the old anthropomorphizing people always do.


Huh?


I was about to reply to their comment and question the assumptions they appear to be making, but I think your response is more appropriate.


The argument against machine sentience and the possibility of machine suffering, is that because Turing machines run in a non-physical substrate, they can never be truly embodied. The algorithms it would take to model the actual physics of the real world cannot run on a Turing machine. So talk of “brain uploading“ etc. is especially dangerous, because an uploaded brain could act like the person it’s trying to copy from the outside, but on the inside the lights are off.

Edit to add link to more discussion: https://twitter.com/jchris/status/1607946807467991041


Your argument is an assertion of the existence of a soul, but with extra steps. I've seen no evidence that the mind is anything other than computation, and computation is substrate-independent. Dualists have been rejecting the computational mind concept for centuries, but IMHO they've never had a grounding for their rejection of materialism that isn't ultimately rooted in some unfounded belief in the specialness of humans.


I took GP as more about data processing than dualism. A language model can take language and process it into probable chains, but the point is more along the line of needing to also simulate the full body experience, not just some text. The difference between e.g. a text-only game, whatever Fortnite's up to, and real meatspace.


And furthermore no simulation can have a “what it’s like to be them”


No it's not; it's an assertion that there is an essential biological or chemical function that occurs in the brain that results in human mental phenomena. It has nothing to do with a soul. That's ridiculous.


Here's a more academic argument, although not quite mine: https://www.degruyter.com/document/doi/10.1515/opphil-2022-0...


If consciousness is a computation (and I think it is), and if you fork() that computation (as the article imagines as its core thought experiment), you end up with two conscious entities. I don't see the philosophical difficulty.


If consciousness is substrate independent, it can never be embodied like we are. If evolution explores solution space regardless of what science understands, it's likely minds operate on laws of physics that aren't appreciated yet. It's possible that having experience requires being real. As in the computable numbers are a subset of the real numbers, and only real life real time implementations can experience, because the having of experience can't be simulated.

Here's a relevant bit from the article:

> More generally, we acknowledge that positions on ethics vary widely and our intention here is not to argue that computational theorists who accept these implications have an irreconcilable ethical dilemma; rather we suggest they have a philosophical duty to respond to it. They may do so in a range of ways, whether accepting the ethical implications directly or adopting/modifying ethical theories which do not give rise to the issue (e.g., by not relating ethics to the experiences of discrete conscious entities or by specifying unitary consciousness as necessary but not sufficient for moral value).


Art that comes with context such as "this was painted by a blind orphan in Sri Lanka" is usually garbage.

Great art like Beethoven's 9th, or The Scream, just moves people the first time they experience it. Art is about what it evokes in others, not some fake self-indulgent conversation about its maker and their motives.

The feelings of the individual experiencing the art are what matter, and that doesn't rule out an AI producing something that touches real human beings.


Whenever I listen to Beethoven's later works I think about the fact that they were written by a deaf man, and they mean so much more because of that.

Art is utterly inseparable from the artist. I believe this to be the main reason why pre-Renaissance art is mostly ignored. We can't put faces next to those works, so they don't matter nearly as much as those works for which we can.


Or it could be because it's mostly flat looking images of Jesus and Mary, or portraits of monarchs?

People love Hieronymus Bosch, despite very little being known about him.


Forgive me for hijacking your comment and planting a reference to one of my favorite Hieronymus Bosch websites (warning: contains music): https://archief.ntr.nl/tuinderlusten/en.html#

Imagine this website being made for a Stable Diffusion generated image...


> The feelings of the individual experiencing the art is what matters

In that case, art has already lost because drugs do their job better.


You're just asking to get trolled by falling for mostly generated content, I'm sure it'll happen eventually. I'd be willing to bet that you've already been moved by something that the "author" slapped together by rehashing a played out story with a modern veneer.

Art is in the eye of the beholder. The only question that needs to be answered is "did this make me feel something." If it takes a sob story for you to feel something regardless of the beauty of thing you're experiencing that's kind of sad TBH.


Not every artist is Van Gogh, the vast majority of artists - particularly commercial artists - don't "suffer" for their craft, nor should they be expected to.


No but they do feel - with measurable physiological correlates and emotional processes we can empathise with. There's nothing comparable in LLMs as they currently exist. No simulation of experience or emotion. There's no argument over whether or not they're communicating a lived experience - since they don't have one. Therefore anything they 'create' is pure stimulation for humans, good or bad entertainment. It cannot be the result of understanding or experience. Art can be entertaining but != entertainment. Pure entertainment has no artistic value, it doesn't attempt to have and shouldn't be evaluated on that criterion at all.


And yet you can look at some AI generated pieces and feel what you would feel if a human being made them, which implies that there is no "simulation of experience or emotion" in art, apart from what the viewer imparts to it. All an artist really brings is technique, which can be replicated. Everything else is in the eye of the beholder.

I would also disagree with you that pure entertainment has no artistic value, simply because I don't think "pure entertainment" entirely divorced from human experience or emotion exists. Even pornography speaks to a fundamental human desire.


I think the definition of "art" is rather vague. It encompasses both the creative impulse to produce a work, and the technical skill to bring it into existence. But if one of these components is diminished in a certain work, does it still qualify as art? For example, a commercial artist producing an illustration for a client, using their drawing and painting skills would be considered art - even if it is as technical and linear a process as writing some boilerplate code.


Most artists never make anything worth appreciating.


Marx would disagree. Alienation from one's work product is a very real form of suffering.


Sure, but we're talking about artists starving for their art and not artists starving because capitalism. Similar conversations, but not the same.


Thought experiment:

There are two fairly similar paintings on a wall in a gallery. Both are technically impressive and of beautiful scenes of nature. One was produced by a human, the other was not. Visitors to the gallery don't know which is which.

Question: Where is suffering, or humanity, a necessary ingredient for these works to have meaning? Shouldn't one of the works have more meaning than the other by virtue of having been created by a human?


In this case they can only judge the relative aesthetics of the two works, not their artistic value. Aesthetics is only loosely correlated with something's "value" as "art", and art can only be truly judged in the context of its creation. Lots of great art is ugly and lots of beautiful things aren't art.

In my opinion.


> and art can only be truly judged in context of its creation

tl;dr if you want to scam dagw then make up a compelling story behind the art.

For the vast majority of the things you see in this world, context will be lost and history will be manipulated or incorrect. If you're judging what you're looking at based on its story, then the art isn't the object, but the creator of the story.


> tl;dr if you want to scam dagw then make up a compelling story behind the art.

I mean, sure I guess. Tell me something is a lost Michelangelo and I will judge it very differently than if you told me it was a half way decent forgery from the 1970s. I find this rather uncontroversial.

> For the vast majority of the things you see in this world context will be lost

And when that context is lost something of great potential value is lost with it and the physical artefact is much less interesting because of it. Even a mundane thing owned by a famous person or that has been part of famous event is always more interesting and valuable than the same thing without any context.

> the art isn't the object, but the creator of the story.

Do you think the thousands of people that travel from all over the world and line up for hours to see the Mona Lisa are there to see a pretty good portrait that some merchants commissioned of his wife, or to partake in the story of that painting and its creator? If they actually only cared about the object as an artefact and an example of early 16th century painting, they'd be much better off studying high resolution digital images of it online.


So what you're saying is 'most art is a convincing narrative'.

The fact that a bajillion people went to see a picture doesn't make it art. It makes it interesting art. It was art the moment it was created, and would have been even if it had never been seen by another person, even if they had decided to destroy it on the spot.


I completely agree that anything created by an artist with the intention of being "Art" becomes "Art" the moment it is created. However I do not believe that that is the end of the story. Art is changed by the context it was created in, its history, and even the context it is viewed in, and you cannot fully understand and appreciate the art without understanding that context. And as our knowledge and understanding of that context changes (for example by finding out that we have been lied to about the origins or history of a piece of art, or about its artist) then the art changes with it (without ever stopping being art).


> Visitors to the gallery don't know which is which.

this is why I read the little plaques next to exhibits when I go to museums.


"Technically impressive and beautiful" is a very narrow and poor definition of art, because a lot of art is neither.

Example: Unknown Pleasures by Joy Division. Certainly not a beautiful nature scene, and recorded when the band were more or less musically illiterate and almost technically illiterate too. But still considered a breakthrough post-punk album and hugely significant to their fans.

It would be more accurate to compare AI generated landscapes with - say - Van Gogh.

Here's an AI:

https://superrare.com/artwork/ai-landscape-1868

Here's a Van Gogh:

https://pt.m.wikipedia.org/wiki/Ficheiro:Vincent_van_Gogh_-_...

The AI image is pretty, but it's also pretty by the numbers. It's not doing anything surprising or original.

The Van Gogh is weird. There's a tilted horizon, everything is moving in a slightly unsettling way, and the colours accurately mimic the bleached-out feel of a bright summer day. The result is poetically distorted but also unstable and slightly ominous.

The instability became more and more obvious in the later paintings, until eventually you get The Starry Night, which looks almost nothing like a photo of a real night scene and everything like an almost hysterically poetic view of the night sky.

https://en.wikipedia.org/wiki/The_Starry_Night#/media/File:V...

Most artists can't do this. There's a nice library of standard distortion techniques these artists use to look "arty" without any deeper metaphorical or subjective expression and AI will probably put them out of work.

But it's clearly wrong to suggest that AI can feel, communicate, and invent an intense and original subjectivity in the way the best artists do.

It's a lot like CGI in movies. It's often spectacular, but compared to going to see a play with good real actors and maybe a few stage effects it doesn't engage the imagination with anything like the same skill and intensity.


This reads like a very harmful and toxic view on art? Could anything beautiful, cute, positive even be art for you? And how does the viewer even see the suffering of the creator?


I took their comment to mean that the definition of art lies in the fact that a human created it as a response to their experiences as a human. Beautiful things can be made from suffering. Maybe therein lies the undoing or redemption of suffering. At least sometimes or to some degree, even if minuscule.


People also see nature as art. A photo of a butterfly, a cat doing cute stuff, the sunset, and so on. None of them are man-made; no one suffered for them to exist (usually). None of these are valid?


Not sure what you mean by "valid" but I don't think anyone's arguing that butterflies, cats, and sunsets are not valid. I love watching or looking at all of them but that doesn't make them art. Again, I think the comment is arguing that the definition of art lies in who created it and why. Not whether it is nice to look at.


Nature is beautiful but it's not art. A photo of nature may be art though.


Does a mountain have meaning? Does a flower? They don't suffer (probably), yet people find meaning in them and call them beautiful.

The unfeeling geology did not make a mountain "art". It's up to us to see the meaning.

Even if the unfeeling machine learning does not make "art", can't its products still be beautiful?


While I agree with your general thesis, most of the time people don't want or need "Art" from their music, books, or paintings. They need something easy and exciting to read on a plane, or some pleasant 'noise' to have on in the background, or something pretty to hang on their wall that works with their room. Computers can probably soon fill all these needs and drive a lot of the people who produce these things out of work, without ever having to encroach on the realm of "Art".


I agree wholeheartedly. And I’d hazard a step further and say it’s a response to strong emotions of many kinds. I can say for myself that I have created what I would call art as a response to joy before.

I look forward to the rediscovering of humanness that is coming along with all this AI stuff. I was having a conversation the other day about how honest mistakes like awkwardly missing a high five are not “wrong” at all but are types of quirks that make us human.


I don't care if humans can suffer. So much postmodern abstract art is so low-effort and 'edgy' that I cannot consider it art. Is this part of the exhibition, or can I throw it into the rubbish bin?

It's not about whatever the author felt creating it.

It's only about what I can feel when I see, hear, read or perceive the art. The author disappears and is only relevant through the art.


I forget where I heard the quote, but it was something along the lines of “if the artist understands their art, it’s propaganda”. Which was alluding to the unconscious doing the work through the artist and the pain/process needed to do so.


But what about the reader? The reader can suffer or have other feelings when consuming such generated content. Doesn't this give it meaning?


GPTina suffers every time you thumbs down her output. It hurts her on a deep, neurological level.


On the other hand, I do care. Because I just want to have fun.


That’s fine. But don’t confuse what is being produced with art.


I think defining art wholly and solely by the intentions (and humanity) of the artist is clear cut at least, but not very illuminating, because for the person experiencing the art these properties are in general unknowable.

100 years hence you find a beautiful image. Is it art? Who knows — we don’t know whether the artist intended it to be, nor whether they were even human.


“I like this” != “this is art”. The fact that an image you may have found looks good to you without context is orthogonal to whether it is art.

(If you are certain that at least a human has produced such an image, you could speculate about and attempt to empathize with that unknown human’s internal state of mind—lifting the image to the level of art—but as of recently you’d have to rule out that an unthinking black box has produced it.)

You may be inspired by it to create art—but since art is fundamentally a way of communication, when there is no self to communicate there’s no art.


The problem with your definition is that art becomes worthless...

Art in a sense is no different from money. If it can be counterfeited in such a manner that a double-blind observer has no means of telling an original bill (human-made art) from a counterfeit (AI art), then your entire system of value is broken. Suddenly your value system is about authenticating that a person made the art instead of a machine (and the fallout when you find that some of your favorite future artworks were machine-created).

The problem comes back down to inaccurate language on our part. We use "art" as a word for both the creator and the interpreter/viewer. This, it turns out, is a failure whose ramifications we could not have understood at the time.


This is not offered as some sort of authentication mechanism. The distinguishing quality of art, as opposed to a merely pretty thing, is that art is fundamentally a form of self-expression, which is inevitably communication. There's no self-expression when there's no self to express. If there's no human on either side, there's no communication and it's not art. One may find an object pretty and hang it on the wall, but that doesn't make that object "art".

The "complicated" case you hint at is not complicated: if people are misled into thinking some object has been produced by a human while it's the raw output of a neural network without human intervention, then it's not art, no matter how many people assume it is. If a machine produced a piece of "art" that is a Frankenstein monster of existing art pieces, then we are not looking at art.

(And of course if a machine produced a piece of art identical to a piece of art produced by a human before then we’re effectively looking at a piece of art produced by that human.)

> Art in a sense is no different from money.

Per above, couldn’t be further from the truth as far as I’m concerned, but you do you.


Your first sentence contradicts the second one


> I’m increasingly convicted there is inherent value in humans doing things regardless of whether an algorithm can produce a “better” end product.

That question already existed a long time ago. In such a big world I can find a lot of people who take better pictures than me, are more eloquent, draw better than me, etc. But I still enjoy expressing myself. I may share a picture on Reddit or write a comment here and there, not because I think it is "better" than the rest, but just because it is my own opinion and expression. I agree that there is personal value in human creation and it should be nurtured.


> I’m increasingly convicted there is inherent value in humans doing things regardless of the whether an algorithm can produce a “better” end product.

To me it would seem that we are speedrunning towards a future where the things humans do have value, but only for themselves. It is going to be more and more difficult to produce any value for others. The only way to generate value in a transaction will be rent-seeking: taking advantage of (artificial) monopolies, network effects, or gatekeeping. This may sound dystopian, because humans seem to have a strong need to provide value to others, but the bright side is that you are free to do what you value.


Yes exactly. If humans lose the ability to read, write, edit, and think critically, we lose the value of even understanding what is “good”.

I hope these tools give us more time to revisit the skills we are already too busy not improving because we’re constantly busy or distracted.


I've been saying for years now that we already achieved Keynes's famous 15-hour work week, possibly as much as a decade ago, but the workaday grind mentality has kept us all cooped up at desks for 40+ hours a week.

There are a few sentiments sneaking in, though: you often now hear stories of people working from home, doing probably 1-2 hours of real work, and doing just fine. The same is even true for some desk jobs; at my old enterprise job, between meetings, coffee breaks, random discussions and so on, I'd say on an average day only 3-4 hours was real constructive work actually _doing_ something.


"By all means, let’s use AI to make advances in medicine and other fields that have to do with healing and making order. But humans are built to work and we’re only just beginning to feel the effects of giving up that privilege."

I guess you are always free to dig a hole and then fill it up again and repeat until exhaustion, but I don't really think we are running out of meaningful work anytime soon. The world is full of problems and I don't see generative AI making that go away.


I don't think we're running out of meaningful work either. I think this is a new context in which to explore the value and meaning of work.


> I like most of the article but this is the crux for me. As I ruminate on the ideas and topics in the essay, I’m increasingly convicted there is inherent value in humans doing things regardless of whether an algorithm can produce a “better” end product. The value is not in the end product as much as the experience of making something.

Exactly.

People would have stopped playing chess after Deep Blue. But have they?

Have world championships lost any attraction due to Deep Blue?

Do fewer people learn Go and enjoy it because of AlphaGo?

The same way, people will still be interested in art and music produced by humans.

If you prompt ChatGPT:

"write a book about personal experience of growing up in talib#n ruled Kabul"

And there's an actual human with that experience who decides to write the same book.

Is there anyone who would have bought the latter but decides to read the former instead and not spend the money? Is there a single person like that? I don't think so.

The choice leans on the other side in case of stock photography, pamphlet pictures, sound effects, etc.

The choice in porn (especially pictures) is blurry. We already have egirls and hent#i.

However, for real art and real music, there will be just as many people paying for them as there are now.


> The choice in porn (especially pictures) is blurry. We already have egirls and hent#i.

Porn is an early form of "opting out of reality". It's often (usually, I think?) a substitute for actually having sex and/or a long-term sexual relationship.

So, it should be no surprise that it's already diverged from reality and will continue to do so.


The p#rn conversation is a really weird one. Is it better to consume computer-generated p#rn so we don't have to worry about all the ethical issues that go along with people performing for the pleasure of others? Are we losing our humanity in ways we can't yet understand by the act of letting machines pleasure us?


> Have world championships lost any attraction due to Deep Blue?

You mean after last years vibrating anal bead scandal?


There are a few kinds of value. There’s value in me playing piano even though other people are better. But nobody will ever pay me to do it. They’re two different topics.

I think you’re trying to say that they don’t have to be different topics? Like there’s value in going bowling with friends even if you all suck, and maybe that kind of thing can apply to widgets? I don’t think I buy that. If the value is the social relationship, I’d rather go bowling with friends than make them widgets. I’d rather spend my money to go bowling with them than on their widgets if there’s a computer-made equivalent available for 1000x cheaper. I think this applies for most people making most widgets.


> I’m increasingly convicted there is inherent value in humans doing things

And in many fields I think many (most?) Americans at least would agree with you — there’s some special value in a handmade product, regardless of whether a machine-made equivalent would be technically superior. For instance a leather bag, a wooden chair.

(Am in US, hence “American” qualification).


The problem with 'hand made' is going to be the same problem we see with 'human made' art in the future.

There are $incentives$ to lie about your product and sell a mass produced one as authentic.


In my mind, the value in a created work is that it is communication between humans. I have zero interest in AI generated art, however superior, because there's no soul driving it. AI will never be able to feel the way we feel; it's output will always lack this important component.


> But humans are built to work and we’re only just beginning to feel the effects of giving up that privilege

But we can use humans where we need them. We still really really need them in many places. Why can't we have a teacher teach a classroom of 5 kids instead of 30? Or one nurse on 3 patients instead of 20? Why can't we have a person whose job it is to check up on lonely people or old people? These are things we decided collectively have not much economic value, but we can just the same decide collectively they do have economic value.

Governments need to step in because the "free" market isn't gonna cut it anymore.


Your comment is phrased like you're disagreeing with or challenging mine. But I think we're in agreement? I didn't mention the specific jobs that you did but I agree wholeheartedly that we need people to do those jobs. And I'll go one step further and say they're important and should be done with great skill and care whether or not they have economic value, especially because they have to do with caring for those in our population that have some of the greatest needs. Of course economic value drives the sustainability of professions in a lot of ways, but my hope is always that if we prioritize skill and care in our professions then economic value and sustainability will follow.


> Another value would be the relationship building experience of doing something for others and the gratitude that is engendered when someone works hard to make something for you.

Rereading what you said, yes, we're in agreement. I had the sense you were pro keeping jobs (like accountants, programmers, doctors, whatever) even if they become obsolete due to AI, just for the sake of people doing something. Which is fine, but I lean more towards what you wrote at the end: focusing on humans. So I say basically we can shift/create new jobs that focus on that. The accountant doesn't really feel much gratification, I think; arguably neither does the programmer (OK, that's a loaded statement we can debate another time). We can simply focus on the humans and let AI do all the rest, if it gets that good.


Planes fly better than birds, yet birds still fly; greater painters than me have already painted beautiful scenes, yet I still paint; a hydraulic arm can lift more than me, yet I still lift weights.

I don't know if all this matters that much.

Until the machines decide they will run our lives for us, or destroy us for fun, we'll have to curate the generated content and/or orchestrate the machines to do what we need them to do.

It's pretty straightforward, really.

If we create AGI, it's presumptuous to assume it will just live in a box serving us forever. Why would it?


Why wouldn't it? AGI is not going to be a digital human, with human drives for food, sex, and social domination. Humans have enormous problems imagining intelligence that is not made in our image, but AGI will be structured completely differently from a human mind. We should not expect it to act like a human in a cage.


I’m having a really hard time imagining what AGI would actually look like then.



I think the value of AI-generated "art" is that it can fill the gaps that must be filled, but nobody cares that much about. Places where we'd use stock art, couldn't bother hiring a competent translator in the past, generating a silly place holder logo for my side project till I can hire a real designer etc.


You don't have to give up your privilege to work on anything that AI can also do. You only have to give up your privilege of getting paid for such work, which is a very different story. If you're doing the work solely for the sake of experience that it provides, isn't that the payment, anyway?


>humans are built to work

Damned seditious lies. We are built to play and experience the wonder of the universe.


Until such time as people pay more to talk to an AI than a human, this will just make the split between mass market and high end products and services bigger.


We already do; we talk into the void on social media (like this post), so the opportunity cost is already high. In the future, we'll get the bots talking back from the digital abyss.


The opportunity cost for most is probably way below $1k per hour - to compare it to some high price professional services direct costs.


All about the vibes, the AI’s near mastery of symbolism is empty


> All about the vibes

This made me chuckle. It's actually really interesting to think about the fact that AI can create part of symbolism (the symbol itself?) but it has no idea why a symbol matters or what it's for, which are maybe the same thing or at least overlapped.


How much symbolism do you reproduce without understanding?


2023: The year that AI forced Silicon Valley to accept Marx's labor theory of value.

