
How to Bypass “Slider Captcha” - jsnell
https://medium.com/@filipvitas/how-to-bypass-slider-captcha-with-js-and-puppeteer-cd5e28105e3c
======
philo23
Looking at the source code for at least the first example [0], it looks like a
simple $('form').submit() would work just as well. That's all the library is
doing internally (after some HTML5 form validation checks).

I suspect most slider captchas out there are doing something similar, or at
most setting a hidden field to "true" or "1", etc. Very few are probably
fingerprinting your mouse/touch movements to actually validate that you're a
real human being.

[0] https://github.com/kthornbloom/slide-to-submit/blob/master/js/slide-to-submit.js

~~~
joosters
I was wondering this. Surely a captcha needs to have some form of 'hidden
knowledge' that is validated only by sending something to the server for it to
check - like the jigsaw puzzle captcha at the end of the article, or a 'pick
the images containing a bus' kind of thing. With the slider, there doesn't
appear to be anything stopping a bot client from just submitting a form and
pretending that it interacted with some HTML and JavaScript.
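
joosters' point can be sketched in a few lines: a bot never touches the slider, it just sends the POST the form would have sent. The endpoint, field names, and the `slider_done` flag below are all hypothetical:

```javascript
// A bot bypassing a client-side-only slider captcha just replays the form
// POST directly; it never runs the page's slider JavaScript at all.
function buildFormBody(fields) {
  // Encode fields exactly as a browser form submission would.
  return new URLSearchParams(fields).toString();
}

const body = buildFormBody({ name: 'bot', message: 'spam', slider_done: '1' });

// In a real bot, the body would then be sent with something like:
// await fetch('https://example.com/contact', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
//   body,
// });
```

Unless the server validates something beyond the field values, this is indistinguishable from a real submission.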

~~~
ivanhoe
It's just not popular enough to be handled by bot scripts (yet)... and most
spam bots are not targeting one particular site, so this worked for a while,
just like math captchas and drag-an-image worked for a while. The moment they
get popular enough for bot authors to tackle the issue, they'll stop working.

------
pflenker
The way I see it, Captchas are an approach to fend off _untargeted_ spam bots.
If someone specifically targets your website and tailors his spam bot
accordingly, then you have a problem, but that is not inherent to Slider
Captchas.

On the other hand, a Slider is much more user friendly than a scrambled image
or having to select all pictures with light bulbs.

So I think this article misses the point a bit...

~~~
eastendguy
Exactly. To avoid contact form spam I added a simple "Fill in this number
(123):" to one of our forms. The number is even hardcoded! But that was enough
to reduce contact form spam from ~10-20 a day to zero.

~~~
TazeTSchnitzel
I am one of the admins on a forum for an open-source video game. We have two
extremely easy questions required on signup (one asks for part of the domain
name of the site, the other is a trivially googleable game fact) purely to
stop untargeted spam attacks and it is remarkably effective. Of course some
spammers get through, but that's why we have moderators.

------
phrz
This slider doesn't appear to improve upon reCaptcha's poor accessibility
story, as there's no reference to what a blind user or a user without a mouse
is supposed to do to interact with this element.

~~~
tempodox
Why would it? None of these demographics would be able to interact with online
advertising either, and that's what today's web is all about.

~~~
X6S1x6Okd1st
There's def ads for blind people.

~~~
adossi
Wouldn't deaf ads be for deaf people?

~~~
lucasmullens
Not sure if you're making a joke but I definitely misread def (definitely) as
deaf

------
eastendguy
All these sliders can be automated with the UI.Vision browser extension and
its _visual_ XMove command:
[https://ui.vision/docs/xclick#xmove](https://ui.vision/docs/xclick#xmove)

The target audience is web testers. The advantage of this approach is that it
requires _no_ JavaScript knowledge, as you automate the task visually. It is
similar to the well-known Sikuli tool, but works inside the browser.

The "drawback" of this image-recognition driven approach is that it is much
too slow for spamming... which is actually another advantage from my point of
view.

------
archy_
It's disappointing that so many websites drop trust and human moderation in
favor of half-baked solutions that disregard the disabled. I've found forms
where I could remove the disabled attribute from a submit button and bypass
the captcha entirely; I have no faith in them any more.

~~~
JeremyBanks
Human moderation is not a practical suggestion when literally 99% of
submissions would be bots.

~~~
pavel_lishin
Or for small websites run by a single person.

------
bin0
As mentioned by other comments, we will likely end up in an arms race of
adversarial neural networks, which will be opposed by neural nets driven by
manual classification of spam/not-spam entries. However, I think the spammers
have a decisive advantage. What you are dealing with here is an asymmetric
situation: while sites must spend money on people to do classification, the
spammers have an oracle against which their neural nets can verify responses.
This means improving their bots to fight new anti-spam tech is easier. I can't
say what the solution is here.

~~~
jsnell
The asymmetries cut both ways. The site can detect probing attempts, and if
that happens switch into a mode where the captcha results are obfuscated.
Let's use blog comments as an example:

The normal mode of operation is to give the user a clear error message if they
failed the captcha, and have the post go through if they passed the captcha.

If the number of failures is higher than e.g. 10 in the last hour, successful
captchas cause the comment to be put in a hold queue for an hour. Failed
captchas cause the comment to be rejected. The attacker can't know if they
passed the captcha until an hour has passed, which slows down their iteration
a lot.

You can then slow it down further with some randomization, while keeping the
experience of real users the same. E.g. successful captchas go to the hold
queue. Failed captchas are rejected 99% of the time. The remaining 1% of the
time they go to the hold queue, but are auto-deleted after a random delay of
2h-12h.

So you accept a small amount of temporarily visible spam as the price of
obfuscating the signals.
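
The escalation jsnell describes could be condensed into a single decision function. The thresholds (10 failures per hour, 99%, 2h-12h) come from the comment; the function shape and return values are invented for illustration:

```javascript
// Decide the fate of a comment given the captcha result and how many captcha
// failures the site has seen in the last hour (a probing signal).
function commentFate(passedCaptcha, recentFailures, rand = Math.random()) {
  const probing = recentFailures > 10; // threshold from the comment

  if (!probing) {
    // Normal mode: honest, immediate feedback for real users.
    return passedCaptcha ? 'publish' : 'reject-with-error';
  }

  // Probing mode: successes are held, so the attacker can't observe
  // the outcome for an hour.
  if (passedCaptcha) return 'hold-1h';

  // Failures are silently rejected 99% of the time; the remaining 1% are
  // held and auto-deleted after a random 2h-12h delay to muddy the signal.
  return rand < 0.99 ? 'silent-reject' : 'hold-then-delete-2h-12h';
}
```

Real users only ever see the normal-mode behaviour unless the site is actively being probed.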

~~~
bin0
> The attacker can't know if they passed the captcha until an hour has passed.

Naah. The attacker just does batch learning: dump a ton of comments, wait an
hour, spend a few minutes training, rinse-and-repeat. Your users also won't
tolerate tons of delays.

> You can then slow it down further with some randomization.

So now I wait a few extra hours, and have 99% accurate data. That's still
pretty good, honestly. Ideally, a neural network can get a good model from
data less accurate than 99%; just tweak the learning rate, use SGD, modify
the mini-batch size, etc.

The advantage of the site, of course, is that legitimate businesses are likely
more able to afford people with this sort of knowledge. The scammers are
probably script kiddies who clone a template and tweak a config; they'd have
real jobs if they knew more.

------
rihegher
I like the idea of a puzzle slider captcha. It is funny enough and quite
different from the usual captchas out there.

~~~
vortico
Solving a puzzle would be much more fun than identifying street lights and
then waiting 20 seconds for Google's reCAPTCHA to "check" the result.
Unfortunately both of these things can be easily broken with bots. The data
Google tracks/checks is what's difficult for bots.

~~~
danieldk
_Solving a puzzle would be much more fun than identifying street lights_

But that wouldn't give Google free data annotations.

~~~
SiempreViernes
It would be nice if we could get back to those good old days when doing a
reCaptcha helped digitize books.

~~~
djmips
How did that work, if the purpose of answering a captcha is matching against a
known answer? If you were helping to solve unknown words, it would imply you
would get a few that didn't pass you through the gate even if you answered
correctly.

~~~
thepangolino
Those used to have two words. One known and one not. To purposely fuck the
system you’d have to guess which is the unknown word.

------
suvelx
There are already companies out there doing 'biometric' analysis of user
sessions to discern between authentic, fraudulent and automated sessions, and
they're already being applied to things such as loss prevention in financial
firms.

I had always assumed that this sort of analysis was already done on the
'slider captchas'. It wouldn't surprise me if this becomes a thing.

Humans are almost never going to hit the exact centre of the box, and unless
the browser does some smoothing, I suspect they never swipe smoothly and
horizontally.
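
A toy version of the kind of heuristic suvelx is describing, with invented thresholds: flag a drag whose samples have zero vertical wobble and whose endpoint lands pixel-perfect on the target centre:

```javascript
// Flag a drag as bot-like when it is implausibly precise: every sample sits
// on the same horizontal line AND the cursor lands exactly on the target
// centre. Real heuristics would be statistical; this is just the idea.
function looksAutomated(points, targetCenterX) {
  const ys = points.map(p => p.y);
  const yJitter = Math.max(...ys) - Math.min(...ys);
  const last = points[points.length - 1];

  const perfectlyStraight = yJitter === 0;     // no vertical wobble at all
  const deadCentre = last.x === targetCenterX; // pixel-perfect landing

  return perfectlyStraight && deadCentre;
}
```

A human drag carries enough noise that either condition almost always fails.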

~~~
Anarch157a
Then bot makers will start applying fuzzing to the movements: randomly placing
the cursor inside a region over the target, varying the speed, releasing the
slider and trying again, etc.

There's no reason _yet_ to go to such lengths, so the extra effort would be
wasted, but as soon as it becomes necessary, someone will do it.
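
The counter-move Anarch157a predicts might look something like this sketch: sample a drag path with random vertical wobble and horizontal jitter instead of a perfectly straight line (the jitter ranges are arbitrary):

```javascript
// Generate a humanized drag path from startX to endX at roughly baseY:
// each sample gets a small random horizontal offset and vertical wobble,
// so no two runs produce the same trajectory.
function humanizedPath(startX, endX, baseY, steps = 20) {
  const points = [];
  for (let i = 0; i <= steps; i++) {
    const t = i / steps;
    points.push({
      x: Math.round(startX + (endX - startX) * t + (Math.random() * 4 - 2)),
      y: Math.round(baseY + (Math.random() * 6 - 3)), // vertical wobble
    });
  }
  return points;
}
```

A bot would then replay these points as mouse-move events with randomized timing between them.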

~~~
adrianN
The endgame is probably training some neural net with real user behavior and
using that to generate realistic usage patterns.

~~~
wolco
Then the spammers will use that data to create more bots.

------
bflesch
That's a great writeup. I wonder if there is some server-side validation going
on. It doesn't seem like the author actually submitted the forms - there could
be further checks.

------
silasdavis
Can anyone recommend a minimally invasive, accessible captcha mechanism that
would perhaps still cut out X% of non-targeted bot spam for some possibly
small but nonzero X?

I do not want to expose users to the reCAPTCHA v3 ToS or any other opt-in
monitoring.

I was considering some form of completely unanalysed input, such as a
non-standard element-based checkbox, as a low-pass filter.

~~~
yorick
You may want to consider running some proof-of-work in the browser; that would
be costly for bots operating at scale.

~~~
ForHackernews
All that does is further penalize the victims of botnets by driving up their
electricity bills. It won't hurt the spammers who run the botnets.

~~~
taffer
This is only true if we assume that spammers are not already saturating the
resources of their victims. And even then, it's better than penalizing blind
or deaf people.

