Many years ago (back when machines weren't so good at image recognition, and we were still better at something) I made "humans.txt": solve simple arithmetic expressions to ensure your services are being consumed by your intended audience - and not bandwidth-wasting humans.
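The whole "test" was about this complicated (a toy sketch, not the actual humans.txt code; the function names and number ranges here are made up):

```python
import random

def make_challenge():
    # Big enough that a human reaches for a calculator; trivial for any script.
    a, b, c = random.randint(100, 999), random.randint(100, 999), random.randint(10, 99)
    return f"{a} * {b} + {c}", a * b + c

def is_robot(answer, expected):
    # Robots welcome; bandwidth-wasting humans politely turned away.
    return answer == expected

question, expected = make_challenge()
print(question)                            # e.g. "417 * 256 + 42"
print(is_robot(eval(question), expected))  # a script passes instantly -> True
```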
dbrand.com should do that for an extra nerdy touch (they call themselves robots and sometimes include hand-drawn notes in random orders, signed by "Robot" and an ID number).
My first thought was that maybe this was some sort of anti-captcha where the images were adversarial examples that a neural network would classify as a shopfront?
However, from the comments here it seems getting past the challenge is less involved than that. Does anyone else know what the actual test is?
I've even decided to close the browser tab whenever I see any of the following:
* Full screen "Join our email list".
* Full screen "Subscribe now".
* Annoying reCAPTCHAs.
* The loading takes more than 10 seconds (it's unbelievable how often this happens).
* "Log in to see our content" on pages you just found via [search engine].
You could argue I miss out by doing so, but that's not how I experience it at all. I just don't want to waste time on such crap. I'm voting with my visit, so to speak.
I like it, but I couldn't help but spot the irony that Tumblr is a bad web citizen that breaks the back button, so the easiest way to leave the site is to close the tab...
When I put it there it wasn't as bad, but the irony builds up when you add in that I wrote the article about why I'd made TC;DR on Medium, which over the following six months filled every page with this rubbish.
That's exactly what it is. If you recall, these image-based captchas were originally introduced in 2007 to digitize books. [1]
In 2012, Google started using captchas to identify house numbers from Google Street View imagery. [2]
Now, users are identifying cars, bikes, traffic lights, and crosswalks most of the time. While Google/Alphabet has been mum on what specifically they're using the data for, it is speculated by engineers at competing firms that they are using this data to help Alphabet's subsidiary, Waymo, with its self-driving car program. [3] This data is either used as training data or to validate outputs that were already classified by their system.
I have serious, serious doubts about the whole approach of "identify known objects in a camera feed so you can try not to hit them". I don't know what the current approaches are but there needs to be some kind of subsumption setup where if you don't recognize an object, at the very least you assume it's a stationary solid object and you don't hit it. It doesn't matter if your classifier says it's a brick, or a grandma, or a hibiscus, or (unknown). Unless you've positively identified it as something that's safe to hit (say, a plastic bag wafting along in the breeze) then don't hit it.
If you do have an identification then you can layer behaviours on top of this (e.g. is it a person? They usually walk forward or in the direction they're looking, so anticipate this). But the default behaviour cannot be "dunno what that is, so I'll ignore it".
"Don't hit the thing" is the most basic, fundamental behaviour for a self driving vehicle.
And this is why I like to call them "Self-Crashing Cars".
I'm fully capable of ploughing into moving and stationary objects at high speed, under acceleration, myself. Thank you very much.
And OTA updates that can change the behaviour of the vehicle between uses? Just no. I have enough trouble switching between my Japanese and European cars (one of each) where the indicator stalks are on opposite sides of the steering column. I'm forever indicating my intention to turn with the wipers!
> Hundreds of millions of CAPTCHAs are solved by people every day. reCAPTCHA makes positive use of this human effort by channeling the time spent solving CAPTCHAs into annotating images and building machine learning datasets. This in turn helps improve maps and solve hard AI problems.
Thnx. When people talk about captchas I usually reply with something about how Google is using them for machine learning, and surprisingly often people shrug it off as a conspiracy theory.
Why have I never just done what you did and linked to Google stating in plain text that this is what it is...
I'm not replying to myself but to everyone who posted to confirm my suspicion. I'm amazed that I've been so blind to it. It's so blatant. And now it makes sense why so many of the challenges are traffic lights, school buses, etc.
I failed the first time when it asked for traffic lights.
Then it asked me to click on all the computers, and I just picked all the greyish squares, since all the others seemed like shots of "natural" things. Got in then.
That would be nice, but IIRC captchas actually use your cookies to decide if you are a human. Maybe incognito or a headless browser would give you initial access here, and then you could copy whatever access token they use from your cookies and add it to your application storage to get access in your normal browser (unless they consistently check your cookies).
Any recommended resources to learn how modern captchas work, at least as far as we know? I've recently watched a few YouTube videos about some mysterious and eerie mass spam campaigns, and there were mentions of software that can automate mass spam. I'm curious how difficult it is to create such monstrosities.
One of the easiest ways to bypass captchas is to use a service like 2captcha or deathbycaptcha: you pay roughly 1 USD per 1,000 captchas, and actual humans sit and solve them for you.
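For the curious, the integration is usually just an HTTP round trip; a rough sketch along the lines of 2captcha's documented submit-then-poll flow (endpoint names and parameters here are from memory, so treat them as illustrative):

```python
import time
import requests

API_KEY = "your-api-key"  # placeholder

def solve_recaptcha(site_key: str, page_url: str) -> str:
    # Submit the job; a human worker on the other end solves it.
    job = requests.post("https://2captcha.com/in.php", data={
        "key": API_KEY, "method": "userrecaptcha",
        "googlekey": site_key, "pageurl": page_url, "json": 1,
    }).json()
    job_id = job["request"]

    # Poll until the worker is done (typically 15-60 seconds).
    while True:
        time.sleep(5)
        res = requests.get("https://2captcha.com/res.php", params={
            "key": API_KEY, "action": "get", "id": job_id, "json": 1,
        }).json()
        if res["request"] != "CAPCHA_NOT_READY":
            return res["request"]  # token to submit as g-recaptcha-response

# token = solve_recaptcha("<site key>", "https://example.com/login")
```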
Heh, I should have figured it would be something as simple. I guess I'm always expecting some cool, complex algorithm devised by hackers to be the core of these things.
What's interesting is that humans get to control computers, but computers don't get to control humans. At least computers are not originating thoughts about controlling humans yet. So technically we could get in by asking a computer to do it, but not the other way around yet.
Any time you classify traffic lights for a captcha you are doing just that: you are being asked to do something by a robot because it is not so confident in its own results. We are just starting to be the cheap labor of robots.
In the end there is always a human. But those images were selected by robots; no human was involved. It was the robot's 'decision'. At least that's my point of view.
Amazon is trying hard to replace its foremen and forewomen with computers. Maybe AWS will develop a nifty web service that allows all of us to shift people management to software.
On the one hand: I agree. Even if I didn't know about nonbinary people, the common "inclusive" constructions like he/she, he/him, -men/-women, and alternating binary are like speed bumps in the flow. They're an ugly hack to avoid using the ancient and already common neutral terms like person or they and its friends them and their. If a person uses these constructs, they admit they don't see "man" as neutral, so that tired old defense doesn't work. They can handle singular they even for known persons. They never protest singular you!
"But it's for people of unknown gender!" the protest goes. Well, welcome to the queer experience; they probably don't know either.
On the other hand: this was not a good way to promote better phrasing. Your passive voice here--"it's preferable"--suggests it's a Well Known Fact, but it really isn't. Most people have, at best, a vague awareness of the existence of more than two genders, or they're so used to using these awful constructs that they don't realize how obnoxious they are. Acting like your Rare Knowledge is Common Knowledge only works when preaching to the choir.
I have often wondered whether masking personal images you want up on the internet, but don't necessarily want tracked or linked back to you via facial recognition, could be done using a neural-net mask of a different object.
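Something along the lines of a targeted adversarial perturbation could do it, at least against the specific model you're attacking; a minimal FGSM-style sketch assuming a PyTorch classifier (the `model`, target class index, and epsilon are placeholders):

```python
import torch
import torch.nn.functional as F

def mask_as_other_object(model, image, target_class, epsilon=0.03):
    """Nudge `image` (a 1x3xHxW tensor in [0, 1]) so the classifier leans
    toward `target_class` (say, "teapot") instead of a recognisable face.
    One FGSM-style step; real attacks usually iterate."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([target_class]))
    loss.backward()
    # Step against the gradient to pull the prediction toward the target class.
    perturbed = image - epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# perturbed = mask_as_other_object(model, photo_tensor, target_class=849)
```

Whether a perturbation tuned against one model transfers to whatever recognition system is actually doing the tracking is the hard part.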
The URLs of the images seem to be a combination of an MD5 hash and an ID (changing the ID will produce a different image). I guess the point is that only machines can reverse MD5 to get the actual "image name"?
There is no reversing an MD5 hash. You can try to cause a hash collision, or brute-force it, but you can't turn something like 40 bytes of data back into 100, for example.
But in the case of hash -> URL there is a fairly reasonable rule set for what constitutes a plausible reversal, so generated collisions could be reality-checked, unlike other things (like an MD5 of an encrypted file).
A one-way function cannot be reversed by definition. I obviously meant finding a set of possible strings that produce that hash, one of which will likely be the image name. "Reverse" wasn't the perfectly accurate word to use, but sometimes a bit of intuition goes a long way.
Hashes are inherently lossy. Although a rainbow table can maybe tell you one possible input for a given hash, it cannot tell you exactly what was hashed.
You can't reverse most hashes; you can just check whether one thing's hash is the same as another thing's hash. If they are, they're probably the same thing.
Theoretically, no, but in practice if you know that "password123" hashes to "blaHb1ah", and you get a DB of hashed passwords and see "blaHb1ah", you probably know that person's password is "password123" (which is why you use salts to fix that). For all intents and purposes, I just reversed the hash in this context.
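To make the "reversal is really just a lookup" point concrete, a minimal sketch (wordlist and values invented for illustration):

```python
import hashlib

# A toy "rainbow table": precompute hashes for a wordlist of likely inputs.
wordlist = ["password", "password123", "letmein", "hunter2"]
lookup = {hashlib.md5(w.encode()).hexdigest(): w for w in wordlist}

leaked = hashlib.md5(b"password123").hexdigest()  # stand-in for a leaked DB entry

# We only "recover" inputs we already guessed ahead of time.
print(lookup.get(leaked, "<not in wordlist>"))  # -> password123

# A per-user salt defeats the precomputed table: the same password now
# hashes to a different value for every user.
salted = hashlib.md5(b"s4lt:password123").hexdigest()
print(salted in lookup)  # -> False
```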
> So you can assume (probably with good certainty) that you've got the correct password, but you can't be sure.
That's assuming no other constraints.
If the constraints on the password are strong enough (for example: must include letters, numbers, and special characters, and be fewer than 30 characters), there really may be only one input that satisfies those constraints and also hashes to the found value.
True. However, in most circumstances, finding an input that hashes to that value is equivalent to reversing the hash. For example, if your password is "password" and that hashes to "blahblah", and I find that "foobar" also hashes to "blahblah", then I can log into your account with the password "foobar" even though that wasn't your original password.
> However, in most circumstances, finding an input that hashes to that value is equivalent to reversing the hash.
In most circumstances, is guessing your smartphone's PIN code equivalent to hacking the security of your phone?
In both examples, the end result is the same, but the process is absolutely not. Reversing a hash is a general solution that can be reapplied in all circumstances, much like finding a security exploit to bypass the lockscreen of your phone.
Looking up a matching hash in a rainbow table is much like guessing your PIN: it works for that one specific case. It is not a generalised solution across the board.
I'd also argue that a hash (SHA, MD5, etc.) is reversible iff the input's bit length does not exceed the bit length of the hash.
That's how many a password DB is cracked. A hash may have infinitely many preimages, but if the maximum input length (in bits) is less than that of the hash, then rainbow tables can handle it relatively easily.
Not really, since the hash length is the same. You can end up with infinitely many solutions to a hash, and there's no guarantee that the smaller one is the correct one.
My understanding of persistent (stored) XSS attacks is that it's not that the site is malicious, but that it had security holes, so other people who got through the captcha uploaded malicious scripts, and now the site is serving them unawares. Does that sound right?
Yup! See my other post. I was asked to pick computers and I figured they'd all be in the greyish boxes and not the colorful ones. Turns out it was a good assumption.
I just got it by chance; there seems to be an XSS vulnerability and some way to post things. I didn't expect so many alert windows to appear, and I'm not sure what else it was doing.
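For anyone wondering how that happens, the gist of stored XSS in a minimal sketch (hypothetical handler, obviously not this site's actual code):

```python
import html

# Pretend this came in through the "post something" hole and was saved to the DB.
stored_comment = '<script>alert("owned")</script>'

# Vulnerable: user content is dropped straight into the page, so every visitor
# who loads the page runs the stored script.
unsafe_page = f"<div class='comment'>{stored_comment}</div>"

# Safer: escape user content at render time so the browser treats it as text.
safe_page = f"<div class='comment'>{html.escape(stored_comment)}</div>"

print(unsafe_page)  # the <script> tag survives and would execute in a browser
print(safe_page)    # &lt;script&gt;... renders as inert text
```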
I parsed the comment as 1) "this is not an original idea, here is the genesis" and 2) "look at this cool video." The parent comment was addressing 1 and you are addressing 2.
https://www.mrspeaker.net/2010/07/15/humans-txt/