I've held this opinion for a long time, but only recently was I personally affected, and that made me even more convinced.
I was listening to my new releases playlist on Apple Music and heard a track that sounded nice, but also a little generic. I don't know exactly what prompted me to check, but it had all the signs of something fishy going on: a generic cover image, an artist page showing a crazy output of singles last year (all with the same generic artwork), unspecific metadata, and, to my surprise, Reddit posts about this artist being AI.
Now, a lot of music is generic and goes through so many hands you can hardly call it a personal piece of art. But even then, there’s always some kind of connection.
I guess that’s why I felt betrayed.
I thought AI generated art was wrong before, but I didn’t expect to feel this mix of anger and disappointment.
For me, music (like all fine art) is about human connection. It's the artist telling me something human and personal. It's not entirely about the aesthetics of the music; the provenance of the art is very important. If I feel that connection with a song and it turns out that the song wasn't made by a person (it hasn't happened yet, as far as I know), I would have been deceived and would be furious.
A song made by a person using AI as a tool (rather than to generate the music) is different. What matters is that the song is actually an expression of humanity, not the tools used to make it.
However, the presence of AI-generated music means that I am not really willing to buy music anymore unless it's either a few years old or I'm buying it at the merch table the artist has at a live performance.
We're in the very early stages of AI-generated art. What will it be like in 10 years' time? 20? 50? You might think it won't get much better. I think that's unlikely.
And what if there are aliens? I'm being serious. Why does it have to be human intent?
And I think it is entirely feasible that at some point -- how far away, I don't know -- AI becomes superior to us in its appreciation of life and living.
They literally thought it was odd and generic, checked, found it was AI, and got pissed off. How is that "emotional cope"? They correctly guessed something was weird about it based on how it sounded!
Making music involves writing and arranging the score, playing the instruments, writing lyrics, singing, recording, mastering... each of these steps is hard work, takes practice, and by that fact alone will be unique to every person because, in some way, their whole life experience flows into it.
GenAI music is writing a few words about how you'd like the result to sound. That's it. That's the entire original contribution. There is no individuality to it beyond that, because it's not someone making music; it's someone deciding the weights for averaging together existing music that other people actually worked for.
That's not nonsense to me (and judging from the reaction to this news, neither is it to a large number of music fans). It's an absolutely massive, huge, decisive, qualitative difference.
Do you now see how reductive you are being about the technology behind genAI? Reading your own comment, does it not drive the point home that reducing music appreciation to the quoted comment is also hilariously reductive?
Didn't look at it too closely, but the whole article as it stands reads like it could be copy-pasted straight from an LLM chat. Another comment pointing out that there's some code that doesn't do anything is another clue.
(Not saying it was, but if I asked an LLM to create and annotate an HTML-manipulation PoC with code snippets, I'd get a very similar response.)
Edit: Pretty sure the account itself is only here to promote this page.
I believe for the next Half-Life, the latest rumors indicate it is actually back to a flat (non-VR) game. During the press event last month, they were also pretty clear that no VR game is currently in development at Valve.
A huge missed opportunity imo, but maybe playing HL3 on a theater-sized screen is nice enough.
I'm sure they've tried making it hybrid, i.e. VR-optional. I'm curious whether they'd be able to make that work. If not, I don't expect another VR-only HL game.
Some rumors from ~1yr ago indicated they were looking into making it an asymmetric co-op game where one player would be Gordon Freeman on PC and one would be Alyx in VR. Of course, they could have dropped that by now.
It seems, though, as if the WPT score is not super meaningful for measuring actual usability. The growth in passed tests seems suspiciously uniform across browsers, so I guess it has more to do with new passing tests being added than with failing tests getting fixed.
A large number of the tests involve rendering text and basic elements correctly, which is an incredibly difficult problem. Getting JS to run right is one thing, but preventing bugs like "Google Maps works but completely breaks when a business has õ in its name" requires a lot of seemingly useless tests to pass.
Fixing a few rendering issues could fix all of the tests that depend on correct rendering at once, so I think the rate at which tests get fixed makes a lot of sense.
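As a concrete illustration of why a character like "õ" trips up text handling (my own sketch, not a test from the WPT suite): the same visible character can be encoded either as one precomposed code point or as a base letter plus a combining mark, and code that only handles one form breaks on the other.

```python
import unicodedata

# "õ" in its precomposed form: the single code point U+00F5.
precomposed = "\u00f5"

# The same character decomposed: "o" followed by a combining tilde (U+0303).
decomposed = unicodedata.normalize("NFD", precomposed)

# Visually identical, but different code point sequences:
print(len(precomposed))              # 1
print(len(decomposed))               # 2
print(precomposed == decomposed)     # False

# Normalizing both to NFC makes them compare equal again.
print(unicodedata.normalize("NFC", decomposed) == precomposed)  # True
```

Any comparison, search, or glyph-lookup code that skips this normalization step can work fine in testing and then break on real-world input, which is exactly the kind of bug those "seemingly useless" tests catch.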
https://wpt.fyi/results shows that even the big players have room for improvement, but also has a nice breakdown of all the different kinds of tests that make up the score.
>We’ve continued to make solid progress on WPT this month. There has been a significant increase in passing subtests, with 111,431 new passing subtests bringing our total to 1,964,649.
The majority of this increase comes from a large update to the test suite itself, with 100,751 subtests being added - mainly due to the Wasm core tests being updated to Wasm 3.0.
They fixed ~10k tests, but indeed this month is a bit of an exception as there were lots of new tests added.
Imagine an IKEA robot: they could redesign their kitchens to fit it, along with all of their other products. I'd never step into my kitchen again, so why would it need to be made for me anyway?
(They could give the robot instructions on how to assemble their furniture as well; the business plan really writes itself.)
I got access to Kiro from Amazon this week, and they're doing something similar. First, a requirements document is written based on your prompt, then a design document, and finally a task list.
At first I thought that was pretty compelling, since it catches more edge cases and examples than you'd otherwise think of.
In the end, though, all that planning still produced a lot of pretty mediocre code that I ended up throwing away most of the time.
Maybe there's a learning curve and I need to tweak the requirements more, though.
For me personally, the most successful approach has been a fast iteration loop with small, focused problems. Being able to generate prototypes based on your actual code and explore different solutions has been very productive. Interestingly, I have a somewhat similar workflow where I use Copilot in ask mode for exploration before switching to agent mode for implementation. That sounds a lot like Kiro, but somehow it works better.
Anyway, trying to generate lots of code at once has almost always been a disaster, and even the most detailed prompt doesn't really help much. I'd love to see what the code and projects of people claiming to run more than 5 LLMs concurrently look like, because with the tools I'm using, that would become a mess pretty fast.
I doubt there's much you could do to make the output better. And I think that's what really bothers me. We are layering all this bullshit on to try and make these things more useful than they are, but it's like building a house on sand. The underlying tech is impressive for what it is, and has plenty of interesting use cases in specific areas, but it flat out isn't what these corporations want people to believe it is. And none of it justifies the massive expenditure of resources we've seen.
It's infuriating how slowly we're moving towards more open platforms on mobile, when you can just look at the desktop to see how much freedom has been lost.
I always think of Steam as a well-done digital store. Apple, meanwhile, is absolutely uninterested in providing anything beyond the most basic features. Wishlists, shopping carts, curators (which could in theory provide actual quality suggestions, like real stores), more granular review data, and so on would improve the experience immensely. Meanwhile, developers can't even offer paid upgrades to their apps, or run sales.
Apple always says the App Store is necessary to ensure the safety of its users. But right now the App Store keeps app quality down and makes it as opaque as possible.
Sometimes I wish VS Code had something like Code Bubbles, which would make it much easier to see how pieces of code are related. I think it would make AI-assisted coding much easier as well, since the main challenge is often piecing together how changes across multiple files fit together. There has to be a lot of potential for better interfaces beyond a chat sidebar.