Hacker News

Essentially, what large language models let us do is move through an n-dimensional space using only words. What we need is something like Google Maps for semantic spaces, but to build that we first need a solid definition of what a "semantic space address" is and what "semantic roads" look like.
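The metaphor can be made concrete with embeddings: words as coordinates, cosine similarity as a notion of distance, and vector arithmetic as a crude "road" between addresses. A minimal sketch, assuming toy hand-picked 3-dimensional vectors (real embedding spaces have hundreds of dimensions; these particular words and numbers are purely illustrative):

```python
import math

# Toy "semantic space": hand-picked 3-dimensional vectors.
# The words and coordinates are illustrative assumptions, not real embeddings.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.2, 0.1],
    "man":   [0.5, 0.9, 0.0],
    "woman": [0.5, 0.1, 0.0],
    "apple": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity: how 'close' two points in the space are."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest(vector, exclude=()):
    """A 'semantic address' lookup: which known word is closest to this point?"""
    return max(
        (w for w in embeddings if w not in exclude),
        key=lambda w: cosine(embeddings[w], vector),
    )

# A 'semantic road': travel from king in the man->woman direction.
v = [k - m + w for k, m, w in zip(embeddings["king"],
                                  embeddings["man"],
                                  embeddings["woman"])]
print(nearest(v, exclude={"king", "man", "woman"}))  # -> queen
```

With real embeddings the lookup is the same shape, just over a far larger vocabulary and dimensionality, typically backed by an approximate-nearest-neighbor index rather than a linear scan.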



This is a cool way to conceptualize it. When conversing with humans, you can explore various parts of the semantic space, but the process also involves a range of top-down and bottom-up mechanisms that provide a kind of hidden navigation. LLMs lack this capacity, but with the right prompts you can simulate it by knowing how to steer, which is roughly what prompt engineering feels like to me.




