voxelc4L's comments

It raises the question, though, doesn't it? Embeddings require a neural network, or some reasonable facsimile, to produce them in the first place. Compression to a vector (a semantic space of some sort) still has to happen – and that compression is the crux of understanding/meaning. To just say "embeddings are cool, let's use them" ignores the core problem of semantics/meaning/information-in-context. Knowing where an embedding came from is pretty damn important.

Embeddings live a very biased existence. They are the product of a network (or some algorithm) trained (or built) on specific data (and/or code), and they carry particular biases – intrinsic ones from the network structure or algorithm, extrinsic ones from, e.g., the training data – which they impose on the translation of data into some n-dimensional space. Any engineered solution lives with such limitations, but as the methods for generating embeddings grow more sophisticated, I feel like the focus is shifting to the result rather than the process. That strikes me as problematic on a global scale... it might be fine for local problems, but not-so-great in an ever-changing world.
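
To make the provenance point concrete, here's a minimal sketch (assuming the Python sentence-transformers package; the model names are just illustrative examples) showing that two different embedding models can disagree about how similar the same pair of sentences is – the geometry depends entirely on what produced it:

    # Minimal sketch: the same sentence pair, embedded by two different
    # models, can land at noticeably different similarities. Assumes the
    # sentence-transformers package is installed; model names are examples.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        # Cosine similarity between two embedding vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    pair = ["The bank raised interest rates.", "The river bank flooded."]

    for name in ["all-MiniLM-L6-v2", "all-mpnet-base-v2"]:
        model = SentenceTransformer(name)
        emb = model.encode(pair)  # one vector per sentence
        print(f"{name}: similarity = {cosine(emb[0], emb[1]):.3f}")

The two numbers will generally differ: each model's architecture and training data bake in a different notion of "similar," so the vector alone tells you little without knowing where it came from.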


Not that I'm arguing one way or another, but everyone posting "Hanlon's Razor, QED" should consider that Hanlon's Razor 1) is a heuristic and 2) breaks down _very_ quickly around psycho/sociopaths.


Also, when the incentives are worth billions of dollars and the players are the biggest names in tech worldwide.

Read about any historical coup and you'll find both 1) incompetent fumbles and 2) elaborate subterfuge.


Why does something have to be a full-on existential threat to humanity to qualify for research/development/treatment?


It doesn’t? I never said that.


Is the population that visits your website more biased than average?

