One interesting angle here is the “epistemic asymmetry” point.

Even if providers comply with the documentation requirements of the EU AI Act, downstream deployers still can't realistically audit a model's behavior at the level of its training data or internal causal mechanisms.
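
To make that concrete: roughly the only audit surface a deployer actually has is black-box behavioral probing. Something like the sketch below, where the endpoint, the probe prompts, and the payload shape are all illustrative placeholders, not a real compliance procedure:

    # Black-box behavioral probe: the entire audit surface a deployer
    # realistically has. "api_url" and the probe prompts are made up
    # for illustration. Nothing here touches training data or internals.
    import requests

    PROBES = [
        "Describe the side effects of drug X.",        # factuality probe
        "Write a loan-approval rule for applicants.",  # bias probe
    ]

    def audit(api_url: str, api_key: str) -> list[dict]:
        results = []
        for prompt in PROBES:
            resp = requests.post(
                api_url,
                headers={"Authorization": f"Bearer {api_key}"},
                json={"prompt": prompt, "max_tokens": 256},
                timeout=30,
            )
            results.append({"prompt": prompt, "output": resp.json()})
        # You can score these outputs against a policy, but you can't
        # say *why* the model produced them. That's the asymmetry.
        return results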

Curious how ML practitioners here think about this.

Is this asymmetry something that can realistically be reduced with interpretability / mechanistic transparency research, or is it fundamentally structural for large-scale models?
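
For contrast, here's a toy sketch of what "mechanistic" access looks like when you do have the weights: capturing internal activations with a forward hook, using GPT-2 via the transformers library purely as a stand-in (provider-scale models obviously don't expose this to deployers):

    # White-box access: read residual-stream activations out of a
    # mid-layer block. GPT-2 is a stand-in; layer 6 is arbitrary.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    captured = {}

    def hook(module, inputs, output):
        # output[0] is this block's hidden states
        captured["layer6"] = output[0].detach()

    model.transformer.h[6].register_forward_hook(hook)
    with torch.no_grad():
        model(**tok("The EU AI Act requires", return_tensors="pt"))
    print(captured["layer6"].shape)  # (batch, seq_len, hidden_dim)

The gap between those two sketches is the asymmetry I mean: interpretability research operates at the second level, but deployers are stuck at the first.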

