
Experimenting with creating semantic chunks of large podcasts. I got the following chunks: https://gist.github.com/nutanc/a9e6321649be5ea9806b4450b0bd6...

Dwarkesh has 18 splits. https://www.dwarkeshpatel.com/i/151435243/timestamps

I got 171, so roughly nine to ten semantic chunks per timestamped section.
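The commenter doesn't say how the chunks were made, but a common approach to semantic chunking is to embed each sentence and start a new chunk wherever the similarity between consecutive sentences drops. A minimal sketch of that idea (an assumption, not necessarily the method used here; a toy bag-of-words cosine stands in for a real embedding model):

```python
import math
import re
from collections import Counter

def embed(sentence):
    # Toy "embedding": lowercase bag-of-words counts.
    # A real pipeline would use a sentence-embedding model instead.
    return Counter(re.findall(r"[a-z']+", sentence.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_chunks(sentences, threshold=0.2):
    # Group consecutive sentences; split where similarity falls below threshold.
    chunks, current = [], [sentences[0]]
    for prev, cur in zip(sentences, sentences[1:]):
        if cosine(embed(prev), embed(cur)) < threshold:
            chunks.append(current)
            current = []
        current.append(cur)
    chunks.append(current)
    return chunks

transcript = [
    "The model was trained on a large corpus.",
    "Training the model took several weeks.",
    "Let's switch topics and talk about robotics.",
    "Robots need good sensors.",
]
print(len(semantic_chunks(transcript)))
```

With a real embedding model the boundaries track topic shifts much more reliably; the threshold then controls how many chunks a long transcript splits into.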

What did you use to create the chunks?


