1. I was just granting the GP's point to make the broader point that, for the purposes of the original discussion about these "safety declarations", it is immaterial. The declarations are completely unenforceable even if you could detect that someone was training AI.
2. Now, to your point about moving the goalposts: even though I say "if you could detect that someone was training AI", I don't actually think even that is possible. There are far too many ordinary uses of data centers to determine whether one particular workload is "training an AI" versus some other data-intensive task. Supercomputer centers have long run things like weather analysis and prediction, drug discovery, and astronomy pipelines, all of which look pretty much indistinguishable from "training an AI" from the outside.