Hacker News | ngarner's comments

The L4 'Full Access' tier with a $200k daily limit is a massive liability if the 'Score' is still probabilistic. An 'audit trail' in a pricing table usually just means a log of what happened; it doesn't provide deterministic enforcement of the underlying logic.

Claims of "zero hallucination" usually fall apart when the engine has to derive a new insight from the graph. If the LLM isn't reasoning, how are you enforcing fidelity between the graph source and the final output?

The lack of built-in retries is a huge pain, but the bigger risk is a "successful" retry that just outputs another hallucination.

How are you defining a "success" signal for these tasks? Is it just a 200 OK, or are you planning a fidelity audit for each item in the queue to trigger those retries?
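To make the distinction concrete, here is a minimal sketch (in Python, with hypothetical names like `fidelity_check` and `handle_result` -- nothing here is from any real API) of a queue worker that treats a 200 OK as necessary but not sufficient, and only marks an item done after a content-level check:

```python
def fidelity_check(item: dict, output: str) -> bool:
    """Hypothetical content-level audit: require that the output
    actually mentions the entity the queued task was about."""
    return item["entity"] in output

def handle_result(item: dict, status: int, output: str) -> str:
    # A 200 OK only says the call completed, not that the answer
    # is faithful to the source. Both failure modes trigger a retry.
    if status != 200:
        return "retry"  # transport failure
    if not fidelity_check(item, output):
        return "retry"  # "successful" hallucination
    return "done"
```

The real `fidelity_check` is the hard part, of course; string containment is just a placeholder for whatever audit you can actually run per item.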


You can handle this in a few ways depending on the task. Even adding "double check your answer before answering" to the prompt will make the agent take another turn to double-check its work. You can also do this with a fresh task/prompt.

Ideally, if you can validate with code (either a test or an eval), that works best.
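As a sketch of that code-validation loop (assumptions: `call_agent` is a stand-in for whatever model call you use, and the validator is task-specific -- here it just checks for an integer in range):

```python
def validate(answer: str) -> bool:
    # Example eval: the answer must parse as an integer in [0, 100].
    try:
        return 0 <= int(answer) <= 100
    except ValueError:
        return False

def run_with_validation(call_agent, prompt, max_tries=3):
    """Retry the agent until the validator passes or the budget runs out."""
    for _ in range(max_tries):
        answer = call_agent(prompt)
        if validate(answer):
            return answer
        # Fresh prompt for the retry, per the suggestion above.
        prompt = f"Double-check your answer. Previous attempt failed validation: {answer!r}"
    return None
```

Returning `None` after the budget is exhausted keeps the caller in charge of what a hard failure means (dead-letter queue, human review, etc.).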

