Hacker News

When I search for "Tom Faber", I expect to get results relating to a bunch of Tom Fabers.

When I see a knowledge panel result, I expect everything within it to relate to a single entity, regardless of name collisions. If it doesn't, that's incorrect. If it's too hard to disambiguate, it's better to not have the knowledge panel at all.

That said, I did this search just now. https://imgur.com/a/zEjrfte The top organic result is the author's X bio and has his profile pic in the search results page. To the right of it (across a gutter) is the knowledge panel about the physicist. I could see how one might get confused, but the X profile pic is just near the knowledge panel, not actually in it. So if this is what the author saw, I don't agree with the article. It'd be nice if he included an actual screenshot of what he's describing, instead of the cute "Why is Google..." thing.




> When I see a knowledge panel result, I expect everything within it to relate to a single entity, regardless of name collisions. If it doesn't, that's incorrect.

Sure, that's the goal. But this isn't an unreasonable thing for Google to get wrong. It's hard; they're not failing at an easy task here.


"Sure, that's the goal. But this isn't an unreasonable thing for Google to get wrong. It's hard; they're not failing at an easy task here."

Should "it's hard to get right" trump "it's unacceptable to get wrong"? If you keep failing at something, should you be doing it at all?

An uncharitable reading of that position is that it's acceptable to imply someone is dead, so long as the algorithmic process behind it is hard ... lol ... l8ers.

At which point is disinformation acceptable?


So we shouldn't use AI at all because it's often inaccurate? I think it's fine as long as it's made clear that it's not totally reliable, both in widespread public knowledge and in UI disclaimers.


I reckon it's reasonable to say we shouldn't use AI (LLMs) in places where we're looking for accurate information, yeah. Expectations for accuracy are higher in a seemingly curated summary.


Who would it be "curated" by if not AI?

Any form of auto-generated summary online is from AI


We don't need auto summaries if they're not actually correct. It's not a choice between AI and no AI; it's a choice between summaries and no summaries.


People seem to like "mostly correct" summaries, though, as with most other AI output.


> So we shouldn't use AI at all because it's often inaccurate?

Preferably not, yeah.


It is unreasonable when the error has been reported multiple times, as the OP says they did before writing the article.



