There is no framework whatsoever for neural monitoring. Claiming otherwise is willfully dishonest, and more than a bit worrying to see trotted out as a defense of one of the most invasive technologies currently in development.
That's not true at all, at least in a civilian context. As others have pointed out, though, the fact that this is being developed by DARPA means the civilian legal restraints described below may not actually apply, which I agree is very disturbing.
First of all, neural monitoring data would be health data protected under HIPAA.
It could also be relevant to involuntary hospitalization, where the current standard is "clear and present danger." In general, you can't be involuntarily hospitalized for saying something like "I'm having thoughts of suicide," but you could be for describing a specific plan for suicide or actually attempting suicide. The idea that this technology could legally demonstrate that someone is a clear and present danger to themselves is far-fetched: at best it would produce a statistical estimate of risk, not evidence of a specific plan or act. I'm not saying the legal system is perfect or even good, but it's not 100% stupid. Judges can and do distinguish between statistical and non-statistical evidence.
Red flag laws/ERPOs use a less stringent standard, from what I understand, so this kind of evidence is somewhat more likely (though still unlikely overall, I'd argue) to be applicable in that context.