
I have no idea why an accurate calibration of polygraphs has never been published. Here's a straightforward calibration:

Have the subject write down a random number from 1 to 1024. Perhaps this random number is assigned via a phone app. Have the subject put the paper in his pocket.

A maximally accurate polygraph will require no more than 10 questions to determine the number in the subject's pocket.
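
A rough sketch of the arithmetic (a hypothetical illustration of my own, not an actual polygraph protocol): 2^10 = 1024, so ten yes/no questions of the form "Is your number <= N?" pin the number down by binary search, provided the detector's answers are reliable.

    # Hypothetical sketch: a perfectly accurate polygraph acts as a truthful
    # yes/no oracle, and binary search needs ceil(log2(1024)) = 10 questions
    # to recover any number in 1..1024.
    import random

    LOW, HIGH = 1, 1024

    def truthful_answer(secret, threshold):
        # One "question": is the number on the paper <= threshold?
        return secret <= threshold

    def recover(secret):
        lo, hi, questions = LOW, HIGH, 0
        while lo < hi:
            mid = (lo + hi) // 2
            questions += 1
            if truthful_answer(secret, mid):
                hi = mid
            else:
                lo = mid + 1
        return lo, questions

    secret = random.randint(LOW, HIGH)      # the number in the subject's pocket
    guess, asked = recover(secret)
    assert guess == secret and asked == 10  # always exactly 10 questions here
    print("recovered", guess, "in", asked, "questions")

An examiner who can't reliably extract ten honest bits this way has no business claiming to detect lies.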

[However, I think we all know what the result would be, which is why nobody publishes a simple calibration like the above: It is not in a trained examiner's interest to expose the inaccuracy of polygraphs.]




Trying to keep a random number secret likely is less stressful than an actual lie. Even if lie detectors worked, this wouldn't be a great approach. My way is better: scrap them for parts and buy beer with whatever is left over.


> Trying to keep a random number secret likely is less stressful than an actual lie.

Which raises the issue: If a lie detector is accurate only for answers that are stressful, then it sure looks like we need a detector to figure out whether an answer is stressful or not.

Or to put it differently, now you have two problems: Lie detection and stressful-answer detection.

Nevertheless, I concede that your beer detection plan is superior.


Indeed, it occurred to me when I took my LDTs that the test's basic premise is: our machine detects stress responses, and our questioner assumes that repeated stress coincident with the same answer equates to a lie. Stated outright like that, everyone can see the assumption behind the test is obviously, irredeemably flawed.

Therefore I too must concede that the beer plan is flawless.


What if you get $1,000 for keeping it secret?


There's a bias between gaining value you don't have and losing value you already have: people are much more risk averse about losing value, even small amounts. So you'd have to give them the $1k first, probably for performing some task. Put the money in their hands, continue with a distraction task to let the value sink in, and THEN have them lose the money if they fail. Even that won't fully match the risk-aversion models, but it simulates them better. These things are hard to measure.


The goal isn't to detect lies; it's to detect which lines of questioning make you nervous.


Committing to answer "yes" to every question, without even registering the question enough to understand it, would make the polygraph "fail" this calibration even if it actually worked well in real use.



