
This looks really useful! I'll definitely give it a try at the next opportunity. One improvement I would suggest for the future is adding an option for a locally running LLM, such as Llama or Mistral.
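
If it currently talks to the OpenAI chat API, this might be nearly free to add: local runners like Ollama and llama.cpp's server expose an OpenAI-compatible endpoint, so making the base URL and model name configurable could be enough. A rough sketch with the standard openai Python client (the env var names and defaults here are just guesses, not the tool's actual config):

  import os
  from openai import OpenAI

  # Point the stock OpenAI client at a local server instead.
  # Ollama serves an OpenAI-compatible API at this URL by default;
  # LLM_BASE_URL and LLM_MODEL are hypothetical config knobs.
  client = OpenAI(
      base_url=os.environ.get("LLM_BASE_URL", "http://localhost:11434/v1"),
      api_key=os.environ.get("OPENAI_API_KEY", "ollama"),  # local servers ignore the key
  )

  resp = client.chat.completions.create(
      model=os.environ.get("LLM_MODEL", "llama3"),
      messages=[{"role": "user", "content": "Hello from a local model!"}],
  )
  print(resp.choices[0].message.content)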



That would be very simple to implement; the reason I didn't think of it is probably that Llama runs so slowly on my machine!

Thank you for the feedback and the suggestion.



