Please Prove You’re Not a Robot (nytimes.com)
40 points by Futurebot 3 days ago | hide | past | web | 49 comments | favorite





The last few paragraphs go kind of bonkers:

> The problem is a public as well as private one, and impersonation robots should be considered what the law calls “hostis humani generis”: enemies of mankind, like pirates and other outlaws.

> Improved robot detection might help us find the robot masters or potentially help national security unleash counterattacks, which can be necessary when attacks come from overseas. There may be room for deputizing private parties to hunt down bad robots.


This is the kind of rhetoric and narrative-pushing that gives laypeople a skewed image of "robots". Technological optimism is blatantly alive and well in tech circles (that's a whole other problem), but it's sensationalist quotes like this that give everyone else ammo. Skepticism is important, but it's also important to recognize when something resonates with you mainly because it re-affirms or rationalizes something you already believe strongly.

I found this entire article almost comical, as though it were posted here for laughs. There's nothing new about the weak-minded being influenced by something as simple as 10,000 political campaign signs in their neighbors' yards. And Ticketmaster was already screwed up before bots made it worse; it's just a bad system that has ruined going to concerts for me.

It's helpful for more people to realize how many bots are out there, and where social media spam comes from. But I don't believe it's crashing democracy so much as it's just another tool for corrupt, power-hungry politicians wanting to further entrench the supporters already on their side.


"The weak-minded"

Sounds foolish. Even strong-willed individuals can be influenced if the level of external stimuli is high enough.

It's usually those insisting "no, not I!" who are most oblivious to the effects of such forces.


On the note of CAPTCHAs, there is a US software patent on blocking requests by presenting unsolvable CAPTCHAs[1]. Just thought that was an interesting thing.

[1] https://www.google.com/patents/US9407661


It seems there's plenty of prior art in unreadable CAPTCHAs already.

I always found Google reCAPTCHA to be pretty much unsolvable. Unless you turn off JavaScript in the browser, that is; then it becomes easy. Which is pretty surprising: it would seem it should work the other way around, since with JavaScript it should have more data points to make a reliable decision. Going by this patent, another possibility is that the decision is in fact more reliable, and I am just associated with malicious activity...

Read all your Philip K. Dick. He became popular, but he had a very paranoid way of going about exactly this, as the article shows with the Blade Runner example (which is based on a Dick story, if you didn't know; the article doesn't mention it). The problem is that if you mix Bostrom with PKD, you get PKD again: a law requiring a robot to tell you it's a robot will never work, because once it crosses the threshold it will change its circuitry to never tell you. Or maybe it will even say it as a joke: 'yeah, I am a robot, hahaha'.

> Or maybe even tell it in a joke 'yeah, I am a robot hahaha'.

In "Surely You're Joking, Mr. Feynman!", Richard Feynman tells how he got away with hiding the door to one of his fraternity brothers' rooms by answering in exactly such a fashion. Interestingly, his truthful confession was apparently so convincing as a joke that no one present even remembered he had made it.


“Tim: did you take the door?”

“No, sir! I did not take the door!”

“Maurice. Did you take the door?”

“No, I did not take the door, sir.”

“Feynman, did you take the door?”

“Yeah, I took the door.”

“Cut it out, Feynman, this is serious! Sam! Did you take the door . . .”—it went all the way around. Everyone was shocked. There must be some real rat in the fraternity who didn't respect the fraternity word of honor!


> When science fiction writers first imagined robot invasions, the idea was that bots would become smart and powerful enough to take over the world by force, whether on their own or as directed by some evildoer. In reality, something only slightly less scary is happening. Robots are getting better, every day, at impersonating humans.

This is actually the main plot point of Asimov's 1946 short story "Evidence", which was later republished in I, Robot (1950). Stephen Byerley is running for office, and the main protagonists are trying to figure out whether he is human or a robot.

Edit: Removed spoiler


wow... that spoiler was actually a spoiler. I was thinking "I should read that" till I hit that last line.

You should still read it. It's a very rare gem in sci-fi literature.

It's amazing in every way you can imagine. In something like 40 pages you get presented with problems from several very different domains -- and proposed solutions.


Sorry about that, I removed the spoiler to not ruin the story for anyone else.

I have been saying for a while now that our current systems are all relying on the inefficiency of an attacker.

Soon, video and audio of an event or speech won't be proof of anything.

The only way to prove identity will be to have a device which can do challenge-response.

Without it, you won't be able to prove you're not a robot over the internet.

https://news.ycombinator.com/item?id=14786863
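The challenge-response idea above can be sketched in a few lines. This is a minimal illustration, not any specific product: it assumes a hardware device holding a pre-shared secret (the key name and enrollment step here are hypothetical), with the verifier sending a fresh random nonce that the device signs using HMAC.

```python
import hashlib
import hmac
import os

# Hypothetical secret provisioned into the device at enrollment time.
SHARED_SECRET = b"provisioned-at-enrollment"

def make_challenge() -> bytes:
    """Verifier picks a fresh random nonce for every attempt (prevents replay)."""
    return os.urandom(16)

def device_response(secret: bytes, challenge: bytes) -> str:
    """The device proves it holds the secret by signing the nonce."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(secret: bytes, challenge: bytes, response: str) -> bool:
    """Verifier recomputes the HMAC and compares in constant time."""
    expected = hmac.new(secret, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
response = device_response(SHARED_SECRET, challenge)
assert verify(SHARED_SECRET, challenge, response)
```

Because each challenge is a one-off nonce, recording and replaying an old response doesn't help an impersonator; only something holding the secret can answer.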

Forget "hacking elections". A botnet will be able to hack our trust in one another (see CIA reputational attacks), AI will be used to chat up girls online better than any person (see fb AI sales bots), and so on.

Computers can already beat us at chess, Go, etc. How different are humor, honor, and reputation once companies add one more breakthrough to deep learning to model them?

An attacker that can make 100,000 jokes a second, each of which is excellent? The missing breakthrough is how to automate the "human judging" factor. This is the same problem as figuring out diets or treatments: clinical trials take a long time. Same with textbooks.

Once we figure out how to speed that part up, we are going to be able to make AI that knows what's probably going to be funny ahead of time.

PS: See what I did there? Hint... inefficiency of attacker


>Computers can already beat us at Chess, Go, etc. How much different is humor, honor and reputation once companies add one more breakthrough to deep learning to model them?

ha


I've found Google's reCAPTCHA, or whatever they are calling it now, so frustrating lately. First off, it always seems to be about cars and roads, and it seems pretty obvious they're using this to help their self-driving car program. I don't know why, but this rubs me the wrong way. It makes me feel like I'm being used to do free work for Google - like a digital miner of sorts.

Second, it doesn't work that well: it tells me I was wrong when I should have been right, and it keeps throwing images at me, making the login experience quite frustrating. This especially seems to happen around signs, like when you miss part of a sign's pole. Other times it seems to act the opposite way and only wants the middle, larger sections. That uncertainty about what exactly you're supposed to do makes for a bad experience.

The part where it continuously changes the images on you is even more annoying, because you're left there wondering "Do I have to pick many more of these?! Because I almost want to give-up trying to login."

Whatever they did to "improve it", like a year ago or maybe a little less than that, seems broken to me.


I've had the same experience. It seems like a lottery as to whether it'll decide I'm not a robot, or want me to help train their computer vision system. It wouldn't be so bad if it was just a few images and took 2 seconds, like text CAPTCHAs, but yesterday I had one that probably took a full minute to complete, as it just kept showing me more pictures of cars. It was really annoying.

Even more frustratingly, if you're on a VPN or such, it takes even longer. When at the Chaos Communication Congress (whose uplink goes over a VPN) I gave up after some 3-5 minutes of trying to solve an image classification reCAPTCHA.

> It makes me feel like I'm being used to do free work for Google - like a digital miner of sorts.

They are providing you a service--preventing spam bots from abusing services that you want to use--and are using your proof-of-work as payment. I don't see the problem.


They are providing the website I'm visiting a service, not me. Spambots aren't really my personal issue - the site either sorts it or the userbase goes elsewhere.

I don't really mind captchas aside from the new ones that Google's doing, the "Keep clicking roads until there are no roads anymore" ones, just devil's advocate.


That sounds like they're providing a service for a website, and the website is passing on the bill to you.

I don't see a meaningful difference between these phrasings.

I agree, lately they've become incredibly aggravating - frustrating, slow and tedious. I'm pretty sure it stopped being about proving that you're a real person some time ago too.

Yes. I suppose if you have ad-blocking/tracking-blocking you get to do more work as well (Google gives me some 2 panes to identify before allowing me through)

And yes, it's frustrating. You never know if that tiny corner of a triangle sign counts as "a sign" or not


I wonder if you are somehow far outside the norm in a way that is suspicious to Google, e.g. do you use Tor or VPNs or similar?

I'm mostly asking rhetorically, because I haven't needed to do more than click the checkbox saying I am not a robot in months.


I get cars, road signs, storefronts (with signs, often in other languages), and from time to time, mountains. Not sure how the last two fit into the self-driving car hypothesis, unless the program is being extended to other types of self-driving vehicles, such as delivery drones.

I'm being used to do free work for Google

Does Google do [edit - adding quotes:] "free" work for you? Not sure how anyone dependent on Google's services (not you specifically necessarily, but most in general) such as search or mail could justify this perspective.


That's a non sequitur. If you only had to fill captchas to access google services you'd have a point, but plenty of 3rd party websites completely unrelated to Google use them nowadays.

I also much preferred the older text-based captchas (which google also "abused" to OCR text IIRC), I found them easier and quicker to solve and I didn't have to reach for the mouse. Fortunately 4chan at least lets you switch back to "legacy captcha".


If you only had to fill captchas to access google services you'd have a point

No, in that case Google is providing the service to plenty of 3rd party websites completely unrelated to Google, who basically punt and wind up saying "do this work for Google for access".


Right, I'm not saying whether that's a good or a bad thing, I'm just pointing out that it doesn't make sense to say that it's fine to do "free work" for Google because they also provide other services for free. Those two things are mostly unrelated.

If you use gmail but never solve captchas are you freeloading? What if I use adblock but I solve a lot of captchas, am I good? If I pay for Google's business "G suite" do I no longer have to solve captchas? If I don't use gmail or google search can I opt out of captchas?


say that it's fine to do "free work" for Google because they also provide other services for free

That's not what I meant; my poorly communicated point was more about how nothing is really "free" (should have included the quotes earlier!).


Well in exchange for my dependence on their services they show me ads, and make rather a lot of money doing it.

in exchange for my dependence on their services they show me ads

... plus ask you to solve captchas sometimes, right? Is there some line or reason where ads are ok but captchas aren't?

Edit: With captchas the site benefits directly from Google, but ads function similarly.


Well, you can ignore ads, or not click on them, and get on with your work. You can't do that with captchas unless you're willing to be locked out of the page/site.

A less charitable interpretation: letting non-ad-blocking others provide enough value / pay the bills makes ads ok but captchas are not ok because they level the playing field / are required.

In theory, no ad revenue will eventually lock out the site too.


What frustrates me is the lack of a session tracker. On one login form I may have to sit through the captcha multiple times if I'm guessing a password, for example.

Unless... it's deliberate, and I'm just really good at identifying road signs?


"A simple legal remedy would be a “ Blade Runner” law that makes it illegal to deploy any program that hides its real identity to pose as a human. Automated processes should be required to state, “I am a robot.” When dealing with a fake human, it would be nice to know."

What a genius idea. There's no way at all that could be used against people other than spammers.


And where do you draw the line? If you schedule a tweet to be sent in the future, is that a "bot"? And how do you make sure APIs aren't being used by bots - do you force users of 3rd party apps to enter a 1st party captcha to refresh their session every day?

Twitter already identifies what app was used to post a tweet, if you're looking through the API not their app. It would be trivial to display that information and distinguish between apps sent by humans and by automated programs.

Sure, but then you're either pushing the responsibility on twitter to monitor how every API key they give out is used and label certain apps as bots, or on the user to look up an app and determine for themselves if it's a human or a bot.

Indeed. "If you outlaw guns, only outlaws will have guns" seems to apply here too. If anything, it will make it easier to impersonate a human, because people will assume that if it doesn't self-identify as a bot, it must be a human.

But what if the blade runners are also bots, and mistakenly ban humans? Who would you blame?

Silly idea altogether. The robots.txt is as close to this as we should go.


How would you even enforce that, though?

Compare it to anti-spam rules. They don't stop people from spamming, but since spamming is illegal, it can be investigated by the police, and people or companies found breaking the rules can be punished.

Might be easier to establish such a framework of laws and norms if nation states weren't the leaders in this area of technology for manipulating public opinion.

So first try to pass laws to make it unlawful for your government to use tools like these against its own citizens. Then you might have a chance to tackle it more generally, if you can get that far against the self interest of those with the tools and the companies that furnish them.


There are also "prove you are a robot" schemes, like proof-of-work.
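For illustration, here is a minimal hashcash-style proof-of-work sketch (parameter names are my own): the server hands out a challenge string, and the client must find a nonce whose SHA-256 hash has a given number of leading zero bits. Finding the nonce costs many hash attempts; checking it costs one.

```python
import hashlib
from itertools import count

def solve(challenge: str, difficulty: int = 12) -> int:
    """Brute-force a nonce; expected work is about 2**difficulty hashes."""
    target = 1 << (256 - difficulty)
    for nonce in count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def check(challenge: str, nonce: int, difficulty: int = 12) -> bool:
    """Verification is a single hash, so the server's cost is trivial."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty))

nonce = solve("example-challenge")
assert check("example-challenge", nonce)
```

The asymmetry (expensive to solve, cheap to verify) is what makes this usable for rate-limiting: a legitimate client pays a fraction of a second, while bulk abuse pays that cost per request.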

I did a job application once where, as part of the application, you had to solve a simple math problem displayed at a specific URL[1]. The catch was that the math problem was different every time and expired in half a second, so the only way to do it was to script it. I guess it's a time-based proof-of-work kind of thing.

[1] IIRC, the URL would give you a simple arithmetic formula to solve and you would hit another URL with the result, something like /foo/<result> and it would give you a code. You then included the code in the application.
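The scheme described above can be sketched as a local simulation (the actual URLs and formula format from the application aren't known, so everything here is assumed): the server issues a random arithmetic problem with a 500 ms deadline, and only a scripted client can fetch, solve, and answer in time.

```python
import operator
import random
import time

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}
TTL = 0.5  # the challenge expires half a second after being issued

def issue_challenge():
    """Server side: return a fresh arithmetic problem and its deadline."""
    a, b = random.randint(1, 99), random.randint(1, 99)
    op = random.choice(list(OPS))
    return f"{a} {op} {b}", time.monotonic() + TTL

def check_answer(problem: str, deadline: float, answer: int) -> bool:
    """Server side: the answer only counts if it arrives before the deadline."""
    if time.monotonic() > deadline:
        return False
    a, op, b = problem.split()
    return OPS[op](int(a), int(b)) == answer

def solve(problem: str) -> int:
    """Client side: trivial for a script, impossible for a human in 500 ms."""
    a, op, b = problem.split()
    return OPS[op](int(a), int(b))

problem, deadline = issue_challenge()
assert check_answer(problem, deadline, solve(problem))
```

It's a reverse CAPTCHA: the deadline filters for machine-speed responders, which is exactly what the employer wanted to select for.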


That's perfectly automatable, but I guess no one took the time to implement a robot for it.



