
Please Prove You’re Not a Robot - Futurebot
https://www.nytimes.com/2017/07/15/opinion/sunday/please-prove-youre-not-a-robot.html?_r=0
======
Eridrus
The last few paragraphs go kind of bonkers:

> The problem is a public as well as private one, and impersonation robots
> should be considered what the law calls “hostis humani generis”: enemies of
> mankind, like pirates and other outlaws.

> Improved robot detection might help us find the robot masters or potentially
> help national security unleash counterattacks, which can be necessary when
> attacks come from overseas. There may be room for deputizing private parties
> to hunt down bad robots.

~~~
Pigo
I thought this entire article was almost comical, and posted here for laughs.
There's nothing new about the weak-minded being influenced by something as
simple as being exposed to 10,000 political campaign signs in their neighbors'
yards. And Ticketmaster was already screwed up before bots made it worse; it's
just a bad system that has ruined going to concerts for me.

It's helpful for more people to realize how many bots are out there, and where
social media spam comes from. But I don't believe it's crashing democracy so
much as it's just another tool for corrupt, power-hungry politicians wanting
to further entrench the supporters already on their side.

~~~
dvdhnt
"The weak-minded"

Sounds foolish. Even strong-willed individuals can be influenced if the level
of external stimuli is high enough.

It's usually those insisting "no, not I!" who are most oblivious to the
effects of such forces.

------
beefhash
On the note of CAPTCHAs, there is a US software patent on blocking requests by
presenting unsolvable CAPTCHAs[1]. Just thought that was an interesting
thing.

[1]
[https://www.google.com/patents/US9407661](https://www.google.com/patents/US9407661)

~~~
raverbashing
It seems there's plenty of prior art in unreadable CAPTCHAs already.

------
tluyben2
Read all your Philip K. Dick. He became popular, but he had a very paranoid
way of looking at exactly this, as with the article's Blade Runner example
(Blade Runner is based on a Dick story, if you didn't know; the article
doesn't mention that). The problem is that if you mix Bostrom with PKD you get
PKD again: a robot law requiring robots to tell you they are robots will never
work, because once a robot crosses the threshold it will change its circuitry
to never tell you that. Or maybe it will even tell you in a joke: 'yeah, I am
a robot hahaha'.

~~~
dotancohen
> Or maybe even tell it in a joke 'yeah, I am a robot hahaha'.

In "Surely You're Joking, Mr. Feynman!", Richard Feynman tells how he got away
with hiding the door to one of his fraternity brothers' rooms by answering in
exactly such a fashion. Interestingly, it was apparently so convincing that no
one present even remembered that he had confessed.

~~~
wolfgang42
“Tim: did _you_ take the door?”

“No, sir! I did not take the door!”

“Maurice. Did _you_ take the door?”

“No, I did not take the door, sir.”

“Feynman, did _you_ take the door?”

“Yeah, _I_ took the door.”

“Cut it out, Feynman, this is _serious!_ Sam! Did _you_ take the door . .
.”—it went all the way around. Everyone was _shocked._ There must be some real
_rat_ in the fraternity who didn't respect the fraternity word of honor!

------
ejolto
> When science fiction writers first imagined robot invasions, the idea was
> that bots would become smart and powerful enough to take over the world by
> force, whether on their own or as directed by some evildoer. In reality,
> something only slightly less scary is happening. Robots are getting better,
> every day, at impersonating humans.

This is actually the main plot point of Asimov's 1946 short story "Evidence,"
which was later republished in I, Robot (1950). Stephen Byerley is running for
office, and the main protagonists are trying to figure out whether he is human
or a robot.

Edit: Removed spoiler

~~~
GhotiFish
wow... that spoiler was actually a spoiler. I was thinking "I should read
that" till I hit that last line.

~~~
pdimitar
You should still read it. It's a very rare gem of sci-fi literature.

It's amazing in every way you can imagine. In something like 40 pages you get
presented with problems from several _very different_ domains -- and proposed
solutions.

------
EGreg
I have been saying for a while now that our current systems all rely on the
inefficiency of an attacker.

Soon, video and audio of an event or speech won't be proof of anything.

The only way to prove identity will be to have a device that can do
challenge-response.

Without it, you won't be able to prove you're not a robot over the internet.
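For concreteness, here is a minimal sketch of what device-based
challenge-response could look like, assuming a shared secret between the
device and the verifier (the HMAC scheme and all names here are my own
illustration, not anything the parent comment specifies):

```python
import hashlib
import hmac
import secrets

# Assumption: the identity device and the verifier already share a secret key
# (provisioned at manufacture, say). The verifier sends a fresh random
# challenge; only a holder of the key can compute the matching response.

def respond(key: bytes, challenge: bytes) -> bytes:
    """What the device computes and sends back."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    """Constant-time comparison on the verifier's side."""
    return hmac.compare_digest(respond(key, challenge), response)

key = secrets.token_bytes(32)
challenge = secrets.token_bytes(16)  # fresh per attempt, so replays fail
print(verify(key, challenge, respond(key, challenge)))  # True
```

A recorded response is useless to an attacker because each login uses a fresh
random challenge.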

[https://news.ycombinator.com/item?id=14786863](https://news.ycombinator.com/item?id=14786863)

Forget "hacking elections". A botnet will be able to hack our trust in one
another ( _see CIA reputational attacks_ ), AI will be used to chat up girls
online better than any person ( _see fb AI sales bots_ ), and so on.

Computers can already beat us at chess, Go, etc. How different are humor,
honor and reputation, once companies add one more breakthrough in deep
learning to model them?

Imagine an attacker that can make 100,000 jokes a second, each of which is
excellent. The missing breakthrough is how to automate the "human judging"
factor. This is the same problem as figuring out diets or treatments:
clinical trials take a long time. Same with textbooks.

Once we figure out how to speed that part up, we will be able to make AI that
knows ahead of time what's probably going to be funny.

PS: _See what I did there? Hint... inefficiency of attacker_

~~~
dwighttk
>Computers can already beat us at Chess, Go, etc. How much different is humor,
honor and reputation once companies add one more breakthrough to deep learning
to model them?

ha

------
mtgx
I've found Google's reCAPTCHA, or whatever they're calling it now, so
frustrating lately. First off, it always seems to be about cars and roads, and
it seems pretty obvious they're using this to help their self-driving car
program. I don't know why, but this rubs me the wrong way. It makes me feel
like I'm being used to do free work for Google - like a digital miner of
sorts.

Second, it doesn't work that well: it tells me I was wrong when I should have
been right, and it keeps throwing images at me, making the login experience
quite frustrating. This especially seems to happen around signs, like when you
miss a part of a sign's pole. Other times it seems to act the opposite way and
only wants the larger middle sections. That uncertainty about what you're even
supposed to do _exactly_ makes for a bad experience.

The part where it continuously changes the images on you is even more
annoying, because you're left there wondering, "Do I have to pick many more of
these?! Because I almost want to give up trying to log in."

Whatever they did to "improve it" a year ago, or maybe a little less than
that, seems broken to me.

~~~
vosper
I've had the same experience. It seems like a lottery as to whether it'll
decide I'm not a robot, or want me to help train their computer vision system.
It wouldn't be so bad if it was just a few images and took 2 seconds, like
text CAPTCHAs, but yesterday I had one that probably took a full minute to
complete, as it just kept showing me more pictures of cars. It was really
annoying.

~~~
majewsky
Even more frustratingly, if you're on a VPN or such, it takes even longer.
When at the Chaos Communication Congress (whose uplink goes over a VPN) I gave
up after some 3-5 minutes of trying to solve an image classification
reCAPTCHA.

------
jtmarmon
"A simple legal remedy would be a “Blade Runner” law that makes it illegal to
deploy any program that hides its real identity to pose as a human. Automated
processes should be required to state, “I am a robot.” When dealing with a
fake human, it would be nice to know."

What a genius idea. There's no way at all that could be used against anyone
except spammers.

~~~
jayrhynas
And where do you draw the line? If you schedule a tweet to be sent in the
future, is that a "bot"? And how do you make sure APIs aren't being used by
bots - do you force users of third-party apps to enter a first-party CAPTCHA
to refresh their session every day?

~~~
dublinben
Twitter already identifies which app was used to post a tweet, if you're
looking through the API rather than their app. It would be trivial to display
that information and distinguish between tweets sent by humans and by
automated programs.

~~~
jayrhynas
Sure, but then you're either pushing the responsibility onto Twitter to
monitor how every API key they give out is used and to label certain apps as
bots, or onto the user to look up an app and determine for themselves whether
it's a human or a bot.

------
nullc
It might be easier to establish such a framework of laws and norms if nation
states weren't the leaders in this area of technology for manipulating public
opinion.

So first try to pass laws to make it unlawful for your government to use tools
like these against its own citizens. Then you might have a chance to tackle it
more generally, if you can get that far against the self interest of those
with the tools and the companies that furnish them.

------
fiatjaf
There are also "prove you are a robot" schemes, like proof-of-work.
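A minimal hashcash-style sketch of such a scheme (the function names and
difficulty parameter are my own illustration): the prover burns CPU searching
for a nonce, while verification costs the server a single hash.

```python
import hashlib
import itertools

def solve_pow(challenge: str, difficulty: int = 4) -> int:
    """Burn CPU until sha256(challenge + nonce) starts with `difficulty`
    hex zeros -- expensive for the prover, by design."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify_pow(challenge: str, nonce: int, difficulty: int = 4) -> bool:
    """Checking takes a single hash -- cheap for the verifier."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = solve_pow("prove-you-are-a-robot", difficulty=4)
print(verify_pow("prove-you-are-a-robot", nonce, difficulty=4))  # True
```

Each extra hex digit of difficulty multiplies the expected work by 16, so the
cost to the prover is tunable while verification stays constant.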

~~~
dkersten
I once did a job application where, as part of the application, you had to
solve a simple math problem displayed at a specific URL[1]. The catch was that
the math problem was different every time and expired after half a second, so
the only way to do it was to script it. I guess it's a time-based
proof-of-work kind of thing.

[1] IIRC, the URL would give you a simple arithmetic formula to solve, you
would hit another URL with the result (something like /foo/<result>), and it
would give you a code. You then included the code in the application.
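A sketch of how such a script might look -- the endpoints and formula format
here are hypothetical, since the real URLs were specific to that application:

```python
import re
import urllib.request

# Hypothetical endpoints -- the real ones were specific to that application.
CHALLENGE_URL = "https://example.com/challenge"
ANSWER_URL = "https://example.com/answer/{result}"

def solve_formula(formula: str) -> int:
    """Evaluate a simple arithmetic formula like '17 + 4 * 2'.
    The whitelist regex keeps eval() restricted to digits and operators."""
    if not re.fullmatch(r"[\d\s+\-*/()]+", formula):
        raise ValueError(f"unexpected formula: {formula!r}")
    return int(eval(formula))

def fetch_code() -> str:
    """Fetch the challenge, solve it, and submit within the time window."""
    formula = urllib.request.urlopen(CHALLENGE_URL).read().decode()
    url = ANSWER_URL.format(result=solve_formula(formula))
    return urllib.request.urlopen(url).read().decode()

print(solve_formula("17 + 4 * 2"))  # 25
```

The half-second expiry only filters out humans typing by hand; any scripted
round trip like this passes easily, which was presumably the point.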

~~~
fiatjaf
That's perfectly automatable, but I guess no one took the time to implement a
robot for it.

