
> With LLMs, we can give the exact same instructions, and not be guaranteed the same code.

Set the temperature appropriately, and that problem is solved, no?
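For the record, here is roughly what that looks like with the OpenAI Python client (a sketch, not a guarantee: even with temperature=0 and a fixed seed, the API only promises best-effort determinism, and backend changes surfaced via system_fingerprint can still alter outputs):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # temperature=0 makes sampling greedy; seed requests reproducible
    # sampling, but OpenAI documents this as best-effort only.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": "Write a function that reverses a string."}],
        temperature=0,
        seed=42,
    )
    print(resp.system_fingerprint)  # changes here can break reproducibility
    print(resp.choices[0].message.content)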




No, it is much more involved, and not all providers expose the necessary settings. This means you would need to use local models (with hardware caveats), which raises two questions:

- Are local models good enough?

- What are we giving up for deterministic behaviour?

For example, will it become much harder to write prompts? Will the output be less coherent? And so on.
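To make the hardware caveat concrete, here is a minimal sketch of deterministic local generation with Hugging Face transformers (the model name is just an example; greedy decoding removes sampling randomness, but non-deterministic GPU kernels can still produce different results across hardware):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    torch.manual_seed(0)  # pin any remaining sources of randomness

    name = "Qwen/Qwen2.5-Coder-1.5B-Instruct"  # example model, swap as needed
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)

    inputs = tok("Write a function that reverses a string.",
                 return_tensors="pt")
    # do_sample=False selects the argmax token at every step (greedy
    # decoding), so repeated runs on the same hardware give the same output.
    out = model.generate(**inputs, do_sample=False, max_new_tokens=100)
    print(tok.decode(out[0], skip_special_tokens=True))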





