
So far it's been nice not having to spend a single second mucking with Python (which is slow, and can get complicated once you're managing venvs, etc.), but the Modelfile is what's really piqued my interest. Bundling up metadata about prompts, parameters, and models, and hopefully later embeddings, LoRAs, etc., seems really promising. (Right now most people are just passing prompts around.)
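
For anyone who hasn't opened the format yet, here's a rough sketch of what a Modelfile can look like (the model name and values are just examples, and the exact directives may shift as the project evolves):

    # Build on a base model pulled from the registry (name is illustrative)
    FROM llama2

    # Sampling parameters baked into the bundle
    PARAMETER temperature 0.7
    PARAMETER num_ctx 4096

    # A system prompt shipped alongside the weights
    SYSTEM """You are a terse sysadmin assistant. Answer in short bullet points."""

Then something like "ollama create my-model -f Modelfile" followed by "ollama run my-model" gives you the whole bundle under one name, which is what makes this so much nicer to share than a loose prompt.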

I opened a PR that adds dynamic command output injection, among other interesting things: https://github.com/jmorganca/ollama/pull/132. So you can imagine taking the output of, say, top and having the LLM parse it into something useful for you. Or running a chat session where you roll a d20 between each user turn to determine their luck. And then being able to share that setup with other people generically.
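
To make the top example concrete, here's roughly what it takes in plain shell today (Linux top flags and the model name are just examples; the point of the PR is that this plumbing could live in the Modelfile instead of a wrapper script):

    # Grab one batch-mode snapshot of top and hand it to the model as part of the prompt
    SNAP="$(top -b -n 1 | head -n 20)"
    ollama run llama2 "Here is the output of top: $SNAP. What is eating CPU and memory right now?"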

I'm also hoping that at some point this project, or a similar one, will make it easy to pass around full-blown pipelines/chains of stuff, as well as offer a ChatGPT-style conversation sync and sharing tool.



