
That's moving the goalposts. The assertion was merely whether it's possible to detect that someone is performing large-scale AI training. People are saying it's impossible; I was pointing out how it could be done with a reasonable degree of confidence.

But if you want to talk about "actionable", here are three potential actions a country could take and the confidence level each requires:

- A country looking for targets to bomb doesn't need much confidence. Even if it hits a weather-prediction data center by mistake, the strike still hurts the target.

- A country looking to arrest or otherwise sanction its own citizens needs just enough confidence to obtain a warrant (so "probably"), and it can then gather concrete evidence on the ground.

- A country looking to insert a mole probably doesn't need much evidence either. Even if the mole lands in another type of data center, they're probably still useful.

For most of these use cases, being correct more than half the time is plenty.



