Hacker News

You seriously underestimate just how much _not_ having to tune your LLM for SF sensibilities benefits performance.

As an example from the last six months: people on Tor are producing better-than-state-of-the-art Stable Diffusion output because they want porn without limitations. I haven't had time to look at LLMs, but the degenerates who enjoy that sort of thing say they can get the Llama 2 model to role-play their dirty fantasies and then have Stable Diffusion illustrate said fantasies. It's a brave new world, and it's not on the WWW.

What do you mean by "tune for SF"?

San Francisco sensibilities. A model trained on a large data set will have the capacity to emit all kinds of controversial opinions and distasteful rants (and pornography). The developers then effectively lobotomize it with a rusty hatchet in an attempt to censor that output, which impairs output quality in general.
