Somewhat of a side note: it's interesting to me that the first couple of sentences of the AI podcast sound 'wrong', even though the rest sounds like a real podcast. Is this something to do with having no good initial conditions from which to predict "what comes next"?



The other thing I've noticed is that, as expected, they're stateless to some degree: while they have an overall outline of points to hit, they'll often repeat a peripheral element they covered just a minute before as if it's a brand-new observation. It can be disorienting to listen to, because they'll present something as a new and astute insight when they already spent 90 seconds on it.


This sounds like quite a few podcasts, ironically enough.


The whole thing has a kind of uncanniness if you listen closely. Like one podcaster will act shocked by a fact, but then immediately go on to provide more details about the fact as if they knew it all along. The cadences and emotions are very realistic, but there is no persistent “person” behind each voice. There is no coherent evolution of each individual’s knowledge or emotional state.

(Not goalpost moving; I certainly think this is impressive.)


> Like one podcaster will act shocked by a fact, but then immediately go on to provide more details about the fact as if they knew it all along.

Some podcasters actually do this. For example, I've noticed it in some science podcasts where the goal is to make the audience feel like "gee whiz, that's an interesting fact." The podcaster will act super surprised to set the emotional tone, but of course they often already knew the fact and will follow up with more detail in a less surprised tone.

That doesn't mean this isn't a bug. But stuff like that reminds me that LLMs may not learn to be like Data from Star Trek. They may learn to be like Billy Mays, amped up and pretending to be excited about whatever they're talking about.


E.g. "Acquired" tends to have this, since both co-hosts research the same topic. I think they try to split up the material, but there is inevitable overlap. They have other weird interactions too, like they're trying to outsmart each other, or at least trying not to get outsmarted.

Some podcasts explicitly avoid this by only having a single host do research so the other host can give genuine reactions. E.g. "You're Wrong About" and "If Books Could Kill".


Interesting, that makes sense. I haven't listened to a lot of podcasts, but most of them were interviews, where the two speakers genuinely had different knowledge and points of view.


I do think there's also just a sort of natural goalpost moving when you're talking about something that's hard to imagine. The best comparison in my mind is CGI in movies. When you've never seen something like The Matrix or The Lord of the Rings or even The Polar Express before, it's wild, but the more you see it and sit with it, the more the stuff that isn't right stands out to you.

It doesn't mean it's not impressive, but it's hard to describe what isn't realistic about something until you see it. A technology that gets things 90% right may still be wrong enough to be noticeable, but you can't predict which 10% will be wrong until you try it, and competing technologies may not share the same 10%.


Did you catch where she misreads “what I-S progress?”


lol ya, thought that was funny as well



