
If there is sound (e.g. someone says something) and I'm not actively listening to it, but suddenly I have to focus on what was said, then I can replay the last few seconds and catch up. I guess this is common to some degree at least - it's hard to see how it would be possible to communicate otherwise. After all, we kind of have to keep a full sentence in our heads in order to understand it.

What I definitely can't do is replay something that happened visually if I'm focusing on something else, or not paying attention. The same goes for other physical stimuli like touch etc. If I'm not consciously attending to it, I can't tell what happened before the moment I focused. So if I suddenly find myself in a slightly embarrassing position relative to someone else in a crowded pub, for example, and I wasn't paying attention, I couldn't tell whether it was because of the way I was moving or because of someone else. But if someone said something, I can just replay it. This works nicely for listening to a language I don't know that well too: I just replay it and translate it. But in a conversation there's no time to do that, so I'm stuck.

(I'm not talking about creating or remembering a sequence of events and playing it back in my head as a movie - that's not a problem. But there seems to be an always-on "cache memory" for audio, while there's none for visual or other stimuli. For me, at least. What I'm wondering is: do other people have "cache memory" for visual or other input interfaces?)


