It's interesting that the threat signal can be subconscious, i.e. you notice a threat but the signal isn't strong enough to cross your conscious threshold. But if the computer notices the same thing, it will act. Pretty Kurzweilian, in that they're augmenting our (pretty good) biological neural networks rather than replacing them, with the result being an order of magnitude better than either alone.
Incredibly cool idea. Though I wonder whether they couldn't use a well-trained animal instead of a human as the 'threat detector'. That would allow more direct access to the brain (implanting electrodes in soldiers is not usually acceptable), and multiple animals could be used in parallel to lower the error rate.
However, a quick literature search suggests animals don't produce anything as easy to monitor as the P300, so perhaps this isn't feasible.
No idea how many things they're testing, but it does mention:
In testing, the 120-megapixel camera, combined with the computer vision algorithms, generated 810 false alarms per hour; with a human operator strapped into the EEG, that drops down to just five false alarms per hour.
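Those numbers amount to a two-stage filter: a high-recall computer-vision detector raises lots of candidate alerts, and the human's P300 response vetoes most of the false ones. A minimal sketch of the arithmetic (the 810 and 5 per-hour figures are from the quote; the function and pass-rate framing are my own illustration, not the system's actual design):

```python
def surviving_false_alarms(cv_rate_per_hour: float, pass_fraction: float) -> float:
    """False alarms per hour that survive a second-stage filter
    which lets through only `pass_fraction` of the first stage's
    false positives."""
    return cv_rate_per_hour * pass_fraction

# Working backwards from the reported figures: going from 810
# false alarms/hour to 5/hour means the human EEG stage passes
# only 5/810, i.e. under 1% of the machine's false positives.
implied_pass_fraction = 5 / 810
print(implied_pass_fraction)                              # roughly 0.006
print(surviving_false_alarms(810, implied_pass_fraction)) # back to ~5/hour
```

This only tells you about false alarms, of course; the interesting unreported number would be how many true threats the human stage vetoes by mistake.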