But there is no message outside the rational layer when you're talking to a non-human. The only message is the amount of true information the LLM is able to output - the rest is randomness. It's fatiguing to have your human brain try to interpret emotions and social dynamics where they don't exist, the same way it's fatiguing to try to interpret meaning in a generated image.
I'm pretty sure that if you talk to a dog, it takes as much from your emotions as from your words (which undercuts your point about non-humans).
You look at it in binary categories, but instead, it is always some amount of information and some amount of randomness. An LLM can predict emotions similarly to words. Emotions and social dynamics from an LLM are as valid as the words it speaks. Most of the time, they are correct, but sometimes they are not.
The real difference is that LLMs can be trained to cope with emotions much better ;-)
Yes, fair enough about the dog - "non-human" was the wrong choice of words. But I don't agree that emotions and social dynamics from an LLM are valid. Emotions need real stakes behind them. They communicate the inner state of another being. If that inner state does not exist (maybe it could in an AGI, but I don't believe it could in an LLM), then I'd say the communication is utterly meaningless.
Well, at least to some extent. I mean, changing the inner state of an AI (as they are being built today) certainly is meaningless, because it does not affect other beings. However, the interaction might change your inner state. Like looking at an AI-generated image and finding it beautiful or awful. Similarly, talking to Miles or Maya might make you feel certain emotions.
I think that part can be very meaningful, but I also agree that current AI is built to not carry its emotional state into the world outside of the direct interaction.
Has it been tried the other way? I don't remember an iteration where they weren't obnoxiously over-endearing. After the initial novelty, it would be better to reduce the amount of fake information you have to read, and any attempt at pretending to be a human is completely fake information at this point.
You can always tell it to respond critically and it will. In fact, I've been doing this for quite a few queries after getting the bubbly, endearing first pass, and it really strips the veil away (and often makes things more actionable).
This article sort of reads like they think everything should be immutable, which feels kind of dogmatic. Using `with` for everything by default seems like overkill. But in C#, lately I have been using `readonly` and `init` wherever vars should not be changed after initialization, which is most of them tbh. For small, data-only objects that can easily be immutable and get passed around a lot, this makes sense to me as a way to avoid the kind of bugs they are talking about.
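To make that concrete, here's a minimal C# sketch of what I mean (the `Order`/`Point` names are just made up for illustration, not from the article): `init` setters and `readonly record struct` keep small data-only objects immutable, and `with` produces a modified copy instead of mutating in place.

```csharp
// Minimal sketch; Order and Point are hypothetical names, not from the article.
// `init` setters allow assignment only during object initialization, and
// record types support non-destructive copies via `with`.
public record Order
{
    public string Id { get; init; } = "";
    public decimal Total { get; init; }
}

// `readonly record struct`: the whole value is immutable after construction.
public readonly record struct Point(double X, double Y);

public static class Demo
{
    public static void Main()
    {
        var order = new Order { Id = "A-1", Total = 10m };

        // order.Total = 9m;                          // compile error: init-only
        var discounted = order with { Total = 9m };   // copy with one field changed

        var p = new Point(1, 2);
        var moved = p with { X = 3 };                 // same idea for structs (C# 10+)

        System.Console.WriteLine($"{order.Total} -> {discounted.Total}, {p} -> {moved}");
    }
}
```

The nice part is that the compiler enforces it: an accidental mutation like the commented-out line fails at compile time instead of turning into the kind of bug the article describes.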
Ohhh, is that why I keep pressing tab and it doesn't accept the prediction lately? I thought it was a bug. It feels weird for tab to double-indent when it could be accepting a prediction - I wonder if it would be preferable to use alt-tab for a manual indent and let plain tab accept the current prediction?
Edit - On the other hand, a related issue: if the prediction itself starts with whitespace, it would be good for tab to just indent like normal; otherwise you can't indent without accepting the prediction.
Yes, this is really tricky, because nowadays we have people shouting from the rooftops continuously, and half of them are shouting the exact opposite of the other half. The racism around WWII was overt, so from a modern perspective it would be easy to recognize and condemn some of the early behavior, but these days it's more about dog whistling and thought crimes. The signs we would all recognize probably aren't going to appear. But normalized behavior has already shifted a dramatic amount compared to 20 years ago.
Politicisation happens, though slowly. Therewasanattempt used to be funny pictures/videos and is now purely TDS. My city's sub used to have useful local content and is now about 50% national politics.
Isn't that a different phenomenon? This article is about hit songs that have cheap AI covers, almost identical sounding, presumably to poach royalties from the real musicians.
The "ghost" thing is interesting too, that sounds almost like the industry from before The Beatles, when bands were just people hired by the record companies to record songs written by others, and the companies owned pretty much everything.
There is zero evidence in the article to support the claim of an AI artist. It’s far, far more likely the “AI Artists” are actually Spotify-funded ghost artists. The best evidence for “AI artist” is just as supportive of “Spotify ghost artists” as it is of AI.
I listened to a few of the songs and I could definitely believe they are AI. They are extremely clean, near-identical covers, like karaoke versions I guess. If Spotify is funding these, that means they would be trying to poach money directly from the biggest artists/companies. That seems like a much bigger controversy than creating generic background music and spreading that around their algorithms.
The reporting suggests that the "Perfect Fit Content" scheme began in 2017 and had been rolled out on a large scale by 2023, so it's unlikely that it has been reliant on AI music. (It does seem very likely that Spotify is now at least experimenting with replacing or augmenting the ghost musicians with AI.) I don't at all accept that Spotify running a kind of self-payola system with own-brand music is only a big controversy if AI-generated tracks are involved.
I didn't mean it would be a big controversy because it's AI, I meant because they would be replacing major label/artist songs with their own karaoke versions, and then manipulating their own algorithms to promote them. That seems like something the labels would really fight against.
But they don't have to target individual musical acts or individual songs for replication to drain their purses. Time spent listening to Spotify's own-brand lo-fi is time not spent listening to playlists full of expensive third-party musicians, including musicians in whole other genres. And if they did want to make and promote close covers of individual songs then they'd probably call humans: people are already very good at that (many such covers exist already) and (IANAL!) the legal risks are probably smaller and better-understood. After all, copyright defences of unlicensed generative AI seem to rely on the notion that its output is transformative, but presumably it would be hard to make that claim when you ask an AI to produce a near-exact replica of a song you put into its training data.