Hacker News | loaderchips's comments

Very well put. I love Claude, but Anthropic as a company sucks.

TL;DR

The Problem: When your AI fails, "the algorithm did it" won't fly. Insurance, courts, and regulators need a human name.

The Pattern: Ships got captains. Bridges got licensed engineers. Planes got pilots. Medicine got attending physicians. Same reason: you can't punish "the team."

The Solution: System Liability Engineer (SLE) = one person who understands the system, has veto power, signs their name, and faces career consequences if it causes serious harm.

The Timeline: Insurance exclusions already at 28%. Courts asking "who was responsible?" by 2026. Mandatory by 2030. You can get ahead or get dragged.

The Litmus Test: Ask them: "If this system causes serious harm, are you prepared to explain it publicly and accept being fired?" If not "yes," they're not the SLE.

Why It Works: AI can fake text, images, and code. It can't fake: years building reputation, a specific human body signing documents, a finite career at stake, real legal consequences.

What To Do: Name one person SLE for your highest-stakes AI system this week. Give them veto power in writing. Have them map "who gets hurt, how badly." That's it—you're 80% there.

The Real Reason: When making truth-claims costs nothing, only institutions grounded in irreversible human cost survive. The SLE is that cost.


Thank you for the thoughtful comment. Your questions are valid given the title, which I used to make the post more accessible to a general HN audience. To clarify: the core distinction here is not kernelization vs kNN, but field evaluation vs point selection (or selection vs superposition as retrieval semantics). The kernel is just a concrete example.

FAISS implements selection (argmax ⟨q,v⟩), so vectors are discrete atoms and deletion must be structural. The weighted formulation represents a field: vectors act as sources whose influence superposes into a potential. Retrieval evaluates that field (or follows its gradient), not a point identity. In this regime, deletion is algebraic (append -v for cancellation), evaluation is sparse/local, and no index rebuild is required.
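A minimal numpy sketch of the two retrieval semantics. The function names and the Gaussian kernel are illustrative choices for the field formulation, not taken from the paper:

```python
import numpy as np

def select(q, V):
    """Selection semantics (FAISS-style): argmax <q, v> returns a point identity."""
    return int(np.argmax(V @ q))

def field(q, sources, weights, sigma=1.0):
    """Field semantics: weighted sources superpose into a scalar potential."""
    d2 = np.sum((sources - q) ** 2, axis=1)
    return float(weights @ np.exp(-d2 / (2.0 * sigma ** 2)))

rng = np.random.default_rng(0)
V = rng.normal(size=(5, 8))
V /= np.linalg.norm(V, axis=1, keepdims=True)  # unit vectors
w = np.ones(5)

q = V[2]            # query sitting exactly on one source
hit = select(q, V)  # selection returns a discrete atom (index 2)

# Algebraic deletion: append the same source with weight -1.
# Its kernel contribution cancels exactly; no index rebuild.
V2 = np.vstack([V, V[2]])
w2 = np.append(w, -1.0)

# Identical to structurally removing row 2 from the index:
structural = field(q, np.delete(V, 2, axis=0), np.delete(w, 2))
algebraic = field(q, V2, w2)
```

The point of the sketch is the last two lines: under superposition, `algebraic` and `structural` evaluate to the same potential, so deletion never touches the stored structure.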

The paper goes into this in more detail.


Great work! How about replacing the global encoder with a Mamba (state-space) vision backbone to eliminate the O(n²) attention bottleneck, enabling linear-complexity encoding of high-resolution documents? Pair this with a non-autoregressive (non-AR) decoder, such as Mask-Predict or iterative refinement, that generates all output tokens in parallel instead of sequentially. Together this creates a fully parallelizable vision-to-text pipeline and addresses both major bottlenecks in DeepSeek-OCR.
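To make the non-AR half concrete, here is a toy Mask-Predict style loop: predict every position in parallel, then re-mask the least confident positions and refine. The `predict` stub stands in for a real decoder; nothing here is DeepSeek-OCR's actual pipeline.

```python
import numpy as np

MASK = -1  # sentinel for an undecided position

def predict(tokens, rng):
    """Stub decoder: returns (token, confidence) for every position at once.
    A real model would condition on the unmasked tokens and image features."""
    n = len(tokens)
    return rng.integers(0, 100, size=n), rng.random(size=n)

def mask_predict(length, iterations=3, seed=0):
    rng = np.random.default_rng(seed)
    tokens = np.full(length, MASK)
    for t in range(iterations):
        tokens, conf = predict(tokens, rng)  # all positions in parallel
        # Re-mask the k least confident positions; k decays to 0 by the last step.
        k = int(length * (1 - (t + 1) / iterations))
        if k > 0:
            tokens[np.argsort(conf)[:k]] = MASK
    return tokens

out = mask_predict(8)  # fully decoded: no MASK remains after the final step
```

Decode cost is `iterations` forward passes regardless of sequence length, versus one pass per token for an AR decoder.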


Not sure why I'm getting downvoted. I'd love to have a technical discussion on the validity of my suggestions.


I wonder how fast this would be when run on something like Groq.


You have articulated what I have been feeling towards Apple really well. I like their products, but their philosophy and approach are not up to par.

