
>> "Three, what if the AI is just wrong and starts confusing the student? Even GPT-4 fabricates things all the time."

It was bad enough when the text was wrong because of shoddy editing.




This is the purpose of LLMOps: to provide guardrails that keep the output precise and prevent hallucination.


This is the first I'm hearing of LLMOps; please elaborate a bit on what it entails. How does it provide guardrails?


Basically: do you want a pipeline or process that uses LLMs and produces exactly the output you want? How do you formalize a human expert workflow using LLMs? With LLMOps. It is new; there is no standard or predefined resource that I'm aware of. At neuralnetes we create all of our methods ad hoc and use tight feedback loops to ensure ideal, precise output.
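
A rough sketch of what such a feedback loop could look like (call_llm and validate here are hypothetical placeholders, not a specific product or API): generate a draft, run it through validators, feed the failures back into the prompt, and retry until it passes or gives up.

    # Hedged sketch of a guardrailed LLM pipeline: generate -> validate -> retry.
    # call_llm() and validate() are hypothetical placeholders, not a real API.

    def call_llm(prompt: str) -> str:
        # Placeholder for whatever model API the pipeline actually uses.
        raise NotImplementedError

    def validate(output: str) -> tuple[bool, list[str]]:
        # Placeholder checks: schema validation, citation lookup, unit tests, etc.
        return (len(output) > 0, [] if output else ["empty output"])

    def guarded_generate(prompt: str, max_attempts: int = 3) -> str:
        feedback = ""
        for _ in range(max_attempts):
            draft = call_llm(prompt + feedback)
            ok, errors = validate(draft)
            if ok:
                return draft  # output passed the guardrails
            # Feed the validator's complaints back into the next attempt.
            feedback = "\nFix these issues: " + "; ".join(errors)
        raise RuntimeError("output failed validation after retries")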


That sounds like technical editing but for LLM output instead of technical writers.


It's mostly software engineering.


Sounds like it's mostly air unless you can point to an LLM that doesn't confabulate.


No need to set impossible standards here, the correct thing to ask is whether the guardrailed LLM confabulates less than its competitors. That sounds like a useful job to have.


I'm not setting the standards; the person I replied to did. Anyone can make up imaginary tech that does amazing imaginary things.


It's not imaginary if it works.


All I'm saying is prove it.



