
Could it be done with “scroll around on a map” performance? I would think you’d have to process the whole Wikipedia data set ahead of time. I don’t know if that’s difficult or not.



I would bet it could be made that performant.

You wouldn't need full GPT-4 performance to do this sort of sentiment analysis; a cheaper, faster model would work fine.

Furthermore, you wouldn't need to ingest the whole article; the intro section at the very top of the page, just one or two paragraphs, overwhelmingly contains the information needed to decide whether an article is suitable for inclusion.
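
Something like this sketch, assuming the OpenAI Python client (with gpt-4o-mini standing in for "a cheaper, faster model") and Wikipedia's REST summary endpoint, which returns only the lead section; is_suitable and the prompt are illustrative, not from any real project:

    import requests
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def get_intro(title: str) -> str:
        # The REST summary endpoint returns just the lead-section extract,
        # not the full article. Titles use underscores instead of spaces.
        url = ("https://en.wikipedia.org/api/rest_v1/page/summary/"
               + title.replace(" ", "_"))
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        return resp.json().get("extract", "")

    def is_suitable(title: str) -> bool:
        prompt = ("Does this Wikipedia intro describe something worth "
                  "pinning on a map? Answer yes or no.\n\n"
                  + get_intro(title))
        completion = client.chat.completions.create(
            model="gpt-4o-mini",  # stand-in for any cheap, fast model
            messages=[{"role": "user", "content": prompt}],
        )
        answer = completion.choices[0].message.content
        return answer.strip().lower().startswith("yes")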

Add to that that every page can be processed in parallel, and you're looking at probably under two seconds of processing time to load all articles in an arbitrary location.
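
The fan-out is trivially parallel since each classification is independent. A sketch using a thread pool, reusing the hypothetical is_suitable() from above:

    from concurrent.futures import ThreadPoolExecutor

    def classify_all(titles: list[str]) -> dict[str, bool]:
        # Each article is independent, so wall-clock time is roughly one
        # model round-trip no matter how many articles the viewport loads.
        with ThreadPoolExecutor(max_workers=32) as pool:
            return dict(zip(titles, pool.map(is_suitable, titles)))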

Bonus points: an article's suitability is unlikely to change, so classification only needs to happen once; the article's id and a suitability bool can then be stored in a table somewhere.
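
A sketch of that one-time cache, assuming SQLite; page_id here is Wikipedia's stable page id:

    import sqlite3

    conn = sqlite3.connect("suitability.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS suitability ("
        "  page_id INTEGER PRIMARY KEY,"
        "  suitable INTEGER NOT NULL)"
    )

    def cached_suitability(page_id: int, title: str) -> bool:
        row = conn.execute(
            "SELECT suitable FROM suitability WHERE page_id = ?",
            (page_id,),
        ).fetchone()
        if row is not None:
            return bool(row[0])
        verdict = is_suitable(title)  # classify once, then never again
        conn.execute(
            "INSERT INTO suitability (page_id, suitable) VALUES (?, ?)",
            (page_id, int(verdict)),
        )
        conn.commit()
        return verdict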



