Artist matches influencer photos with surveillance footage (smithsonianmag.com)
205 points by pseudolus on Sept 24, 2022 | 82 comments



Discussed extensively 12 days ago:

https://news.ycombinator.com/item?id=32808808


I was wondering: given that this was indeed discussed extensively before (the original write-up was shared), is this Smithsonian post even supposed to be shared? I would have guessed it would simply be ignored.


I think a different article doesn't count as a dupe, as they often have new information.


If it's the same basic story, we treat it as a dupe. We might not mark it explicitly with the [dupe] marker though.


Dupes have more to do with how close the posts are to each other in time. The same article posted months apart isn't a dupe; different articles on the same subject posted hours apart are.


It's not automatic, but sometimes the mods decide manually to mark it as a dupe when the new one has no additional information and the other had a big discussion.


> while others have criticized what they see as a flippant use of potentially harmful technology: showing how easy it is to access livestream footage and facial-recognition software. Many of these critics encouraged the artist never to make the A.I. he developed public.

Security through obscurity? If this artist can create this AI on his own, then many programmers can create it, even if the artist doesn't make his code public. I feel like many non-programmers don't understand how easy some of this stuff is to replicate once they've seen someone else build something.


Something to remember is that "critics" generally just means "random people replying on Twitter, sampled by the article's writer to create a narrative".

It's not even a good sampling of what "many non-programmers" think - especially since the single linked critic claims to be a Software Engineer.


Was there really any need for AI here? Geolocate the photo shoot, timestamp it, and search for a camera that matches? And then it looks like some simple object-classifier boxes from OpenCV to jazz up the images?

I think the bigger issue, and the one that's valuable, is the discussion about privacy in public space. Or, to put it a better way: you have no privacy at all in public spaces.


Yeah, there's no need for "AI" software on this project. From the images, it looks like a straight out-of-the-box use of YOLO v7 or something to make it seem more fancy, but this could easily be done manually without any machine learning at all (or code, for that matter).
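
For anyone curious how little code the overlay part takes: a minimal sketch using a pretrained YOLOv5 model via torch.hub as a stand-in for whatever he actually ran (the frame file name is made up):

    # Out-of-the-box person detection on a single saved webcam frame.
    import torch

    model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
    results = model("timessquare_frame.jpg")

    # Keep only "person" detections and print their bounding boxes.
    for *box, conf, cls in results.xyxy[0].tolist():
        if model.names[int(cls)] == "person":
            print(f"person at {box} (confidence {conf:.2f})")

    results.render()  # draws the green-box overlays onto the frame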


Hmm, yeah, privacy in public is an interesting thought. I don't think it's ridiculous to say you should have _some_ privacy, even in a public space. It feels like the expectation is that you can be physically seen, but beyond that I don't see how someone loses all facets of privacy, unless you're assuming a surveillance state.


For the longest time in urban public spaces, there was a sort of privacy by means of the anonymity they provided. You knew that you would likely never see any of the people around you again. That's definitely completely gone now.


This adds less than you'd think, since people are already tracked by their cellular location. Some offshoot use cases are determining where someone travels and when, along with who they interact with, by looking at adjacent cellular signals.

From that, you can quickly work out where they would be expected to go in their daily life, what route they'll take to get there, who they will interact with along the way, and when they will be doing it.

Combine this with their credit card purchase history and you can also work out what they're going to buy and how long it will take them to eat it.


I (along with many others who are privacy conscious or are just tech luddites) do not carry my cell phone all of the time.


Albeit noisier, facial recognition has been in use for the same purpose for a number of years now. They're both identification mechanisms, but tracking IMSIs and MAC addresses gives both better recall and precision than a network of cameras.


You do still legally have an expectation of privacy if you’re having a conversation and at least trying to ensure it’s somewhat confidential.


You don't know when the image was taken; you have to go through the footage until you find the influencer.

Maybe this wouldn't be that hard to do manually, but doing it with an AI shows that it can be done at scale. Whether the author here actually did it or faked it, I don't know, but the message wouldn't be the same without the AI part.


How do you geolocate a photo shoot and search for a camera that matches? I really don't know what you meant. Assuming you're not talking about phone metadata, of course.


People geotag their own Instagram posts, you can just search Instagram for photos in Times Square and get loads.

If you only want to do this for a few photos, you just need to (eg) record the Times Square webcams for a couple weeks, then find a couple of recently tagged Times Square photos, make a good guess as to time-of-day and exact photo spot and scrub through your recorded footage from those times until you see someone in those clothes with that pose. It'll take a little while, but with this project you only need to do it a few times.
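
To illustrate how mechanical the scrubbing step is, here's a rough Python sketch, assuming you recorded the webcam stream to a file and know when the recording started (file name and timestamps are made up):

    # Seek a recorded webcam stream to the moment a post was (probably) taken.
    from datetime import datetime
    import cv2

    recording_start = datetime(2022, 9, 10, 12, 0, 0)  # when you hit "record"
    post_time = datetime(2022, 9, 10, 15, 42, 0)       # guessed from the post

    cap = cv2.VideoCapture("timessquare_recording.mp4")
    offset_ms = (post_time - recording_start).total_seconds() * 1000
    cap.set(cv2.CAP_PROP_POS_MSEC, offset_ms)  # jump close to the guessed time

    ok, frame = cap.read()
    if ok:
        cv2.imwrite("candidate_frame.jpg", frame)  # inspect by eye for the outfit/pose
    cap.release()

From there it's just stepping backward and forward until you spot the person.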

Keep in mind that the person doing this has all the context from the Instagram account's other photos and captions. Perhaps the same Instagram account made it really obvious what day this trip was happening, which would narrow it down enormously. We don't have that context, but the artist putting this project together has a free choice of which photos to investigate, and in Times Square or in front of Wrigley Field you can surely find some Instagram posts where it's obvious from the surrounding context what day and time it was taken, EXIF data aside.

And that's assuming the artist didn't just hire these people to show up at Times Square at a convenient time for him to press "record" on the webcam.


It's phone geo data.

Not sure if Instagram makes the exif data available


No, exif data is wiped from photos by any of the large social media companies and public photo hosts and has been like that for years.

Edited to add comma.


I think you’re missing a critical comma?

”No, exif data…”


I think you'd be right!


Instagram strips it, but people add the location tag themselves, which you can maybe access through the API, or at minimum by scraping.


Otherwise sites like this wouldn't exist: https://iknowwhereyourcatlives.com/

(taken from yesterday's posting of OPsec tools: https://news.ycombinator.com/item?id=32951406)


Last I heard, Instagram strips EXIF data on upload.
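
Easy enough to verify on any photo you download from them; a quick Pillow sketch (the file name is made up):

    # Check whether a downloaded photo still carries GPS EXIF data.
    from PIL import Image
    from PIL.ExifTags import GPSTAGS

    exif = Image.open("downloaded_post.jpg").getexif()
    gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the GPSInfo IFD tag

    if gps_ifd:
        for tag_id, value in gps_ifd.items():
            print(GPSTAGS.get(tag_id, tag_id), value)
    else:
        print("No GPS EXIF data (stripped, or never there)")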


> you have no privacy at all in public spaces.

Have we ever?


To varying degrees, depending on your definition of privacy.

For example, it used to be possible to go to a public park with a < 5% chance of being in the background of some random photograph taken by a stranger (or 0% depending on how far we go back in history).

Now in many places you can barely walk 1 block without being in the background of someone's iPhone selfie or captured on someone's Ring camera.


Nope. Only through obscurity - it used to be difficult/expensive to track somebody’s daily movements. Now it’s relatively trivial to do so.


This is where I have trouble: it obviously feels wrong to do at scale in an automated way, but the only difference is that it got easier.


Yeah. I feel the same way. And I realize I facilitate it with my phone and other devices. But still, feels creepy at best and dangerous at worst.


Exactly. This is not technically remarkable given what's currently possible. I would actually be surprised if some variation of this isn't already used by law enforcement or security companies. This looks like something a college student might be able to make with something like OpenCV (scaling aside).
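
For reference, OpenCV ships a classic HOG pedestrian detector that gets you the green boxes in a dozen lines; a sketch (the frame path is made up):

    # OpenCV's built-in HOG + linear SVM person detector: no training,
    # no deep learning, just boxes drawn on a frame.
    import cv2

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    frame = cv2.imread("webcam_frame.jpg")
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))

    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("boxed_frame.jpg", frame)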


> If this artist can create this AI on his own, then many programmers can create it

Yes, the software can't be controlled; the knowledge and skill needed to make it are widespread. But what about the hardware? Production of suitable hardware is very centralized. As more people get burned and shaken by these new systems, I fear we may eventually reach a critical mass where most of the population is eager to restrict the availability of new hardware to 'trusted' organizations: corporations and governments.


I don't think hardware control is the solution here, as there are no definitive bounds that can limit what is possible; the answer is often just "let it execute over more time" or "run it distributed over a network of other devices".


Yeah - a colleague here is outsourcing this kind of engineering for Fortune 500 security clients right now, and has been for 2+ years. There is big business going on right now building and shipping exactly this as high-end security camera installs, as far as I can tell. It's not new at all.


> Many of these critics encouraged the artist never to make the A.I. he developed public.

We'll have to wait until OpenAI ethically releases an alternative to "geolocate criminals", as usual. :-)


This is just a time-based version of Photosynth.


The folks begging the artist not to release their tools are sort of approaching this from the wrong direction. The problem isn't the tools, which given the information in the article can now be reasonably replicated. The problem is access to the input data.

They should be clamouring for laws that prevent public recording, or that prevent those recordings from being made available to others.


It's not clear that there are any tools. A dedicated person could do it by hand for the six photos presented. And that's assuming they didn't have any other context making the time/date/location obvious (maybe he picked Instagram posts where the caption said "Look where I was at 3pm yesterday!" - some of those are probably lies, but you just have to find one that isn't).

These days, there are people who are extremely good at locating much harder photos in time and space. This one is relatively simple:

https://craighays.com/how-an-investigator-can-find-your-loca...


This was almost certainly a manual process by the artist. The state of the art of commonly available surveillance AI products would not add much in terms of making this process faster or more accurate. (source: I work in surveillance AI).

IMO this is really more of a proof of concept, or a statement on where things COULD go if current technology trends progress for another decade or so.


Yeah, it's a kind of sleight-of-hand or a magic trick: who would spend hours scrubbing the Times Square cameras to match up with Instagram posts? An artist making something like this would.

Magic tricks often operate in the same way, you have some meticulous preparation that can produce a specific result, one time, under specific conditions. If you obfuscate the preparation and people don't realize how specific the conditions are, then it looks like something more profound happened than really did.


From the article:

> Then he enlisted an artificial intelligence (A.I.) bot, which scraped public Instagram photos taken in those locations, and facial-recognition software, which paired the Instagram images with the real-time surveillance footage.


Well, that's what he says he did.

Why would he need an "AI which scraped public Instagram photos"? You can just ask Instagram for photos tagged "Times Square" and get loads of them. Totally unnecessary to teach an AI what Times Square looks like when there are a bazillion tagged photos of Times Square being posted every hour. Maybe you need a lot of them, but "bulk scraping lots of Instagram posts" isn't exactly AI.

https://www.instagram.com/explore/tags/timessquare/
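
A library like instaloader will pull those down for you; a rough sketch (Instagram rate-limits aggressively, and hashtag access may require a logged-in session):

    # Grab recent posts tagged #timessquare with their timestamps,
    # which is all the "scraping" step really needs.
    from itertools import islice
    import instaloader

    loader = instaloader.Instaloader(download_pictures=False)
    hashtag = instaloader.Hashtag.from_name(loader.context, "timessquare")

    for post in islice(hashtag.get_posts(), 50):
        print(post.date_utc, post.shortcode, post.url)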


In order to identify specific individuals in those photos.
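
And that part is also mostly off the shelf these days; e.g. a sketch with the face_recognition library (the image paths are made up):

    # Compare a face from an Instagram photo against faces in a webcam frame.
    import face_recognition

    insta = face_recognition.load_image_file("insta_post.jpg")
    frame = face_recognition.load_image_file("webcam_frame.jpg")

    target = face_recognition.face_encodings(insta)[0]   # assume one clear face
    candidates = face_recognition.face_encodings(frame)  # everyone in the frame

    for i, dist in enumerate(face_recognition.face_distance(candidates, target)):
        match = " (likely match)" if dist < 0.6 else ""
        print(f"face {i}: distance {dist:.2f}{match}")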


I used to be more disturbed by the idea of young people aspiring to be influencers, and the vacuous nature of such a "career". But then I realized it's not much different from any celebrity using that status to make money on sponsorships, especially ones completely unrelated to the roots of their celebrity status.

I guess that doesn't exactly make the activity better, but it at least no longer feels like a further degeneration perpetrated by modern social networks. And it is a sort of democratization of the lucrative paid-shill activities once concentrated among the very few.


Back in the day, celebrities were relatively few and you could ignore them, while most "common" people were genuine.

Now that everyone is (or aspires to be) a "celebrity", you can trust no one. Sure, social media was never completely genuine, but the fakeness was generally limited to bolstering someone's social status, not promoting garbage from for-profit companies.


I think, at least in this context, it's that you can't trust random folks' recommendations on the internet, or the appearance/lifestyle they present. In everyday IRL encounters not too much has changed (again, in the context of paid influencer activity, which is mostly relegated to the media, specifically digital social media).

And when it comes to trying to find real opinions about potential purchases online, I don't really use reviews anymore (and never looked to the influencer world to begin with). These days, if I want real opinions on something, I search for "product $X site:reddit.com" and see what comes up. Even then I figure there's some astroturfing, but it's still more reliable than, say, Amazon reviews.


The problem is people taking advantage of those wanting to be celebrities, which is a problem that has existed for decades.


Celebrities typically did _something else_, like pro sports, movies, Olympics, walking on the moon… but now we know that is unnecessary. Progress? Efficiency? IDK


The problem is they did something else, quite frequently unrelated to their fame. They mostly have no more claim to being an informed advocate for what they’re paid to sponsor than modern influencers. It doesn’t matter if they got an Olympic medal or something, they’re not in any expert position to recommend a specific brand of toothpaste or anything unrelated to their sport. In that respect they’re no different than an influencer. Maybe they’re a little worse? They’re using the actual legitimacy of their athletic achievement to lend a veneer of legitimacy to their completely unrelated (paid) opinion. Subverting what minor trust people might place in them as an authority in their area. At least with an influencer there’s not that sort of reputational arbitrage.


so...you're saying that people that are famous for being famous are an improvement? interesting... it does seem meta-efficient


An improvement? I'm not sure, but I'd like to think through it.....

In the traditional case, there are sponsorships where pre-existing fame, unrelated to the thing being promoted, is used to lend it a false veneer of legitimacy. Not so great.

In the "Influencer" case a person has sort of deliberately & explicitly set themselves up as a broker-dealer [1] of attention they have aggregated from the masses of the internet population. Their endorsement of a product/service is no more valid than those with traditional pre-existing fame. So, also not so great.

And when I reference the democratization of this sort of paid shilling, I'm thinking about it from the perspective of those doing the shilling: there are more opportunities for everyday people to get in on it by hustling to become an influencer. But there's another population to consider: those who consume the content and the paid shilling.

The average consumer probably understands that the athlete/actor/whatever that is, for example, shilling a Citi Bank credit card has no special expertise in banking, comparative appraisals of different credit card offerings, etc. They know it's just a paid gig for the celebrity so the bank can forge a pleasant psychological association for consumers.

On the other hand, influencers often represent themselves & their paid endorsements as if they were authentic/organic opinions on whatever product or service they're being paid to shill.

In the latter case, it seems like there's a lot more potential for people to be deceived and not realize that the influencer's opinions were sold to the highest bidder and aren't authentic representations of their true thoughts on the product/service.

In that respect, no, not an improvement at all. Quite the opposite.

[1] https://en.wikipedia.org/wiki/Broker-dealer


This feels less like an AI that can match surveillance footage to social media posts and more like a dude who matched EarthCam footage with posts made at the same time (probably just found posts tagged #locationTheCameraIsAt) and then added some green boxes on top to make it look high-tech.

Especially given that he has 4 examples of it working in 2 specific locations. Very reasonable to put those together by hand in a few days.


Which is why it's more of an art project than anything. Which is fine, but it's basically a magic trick.


Unable to find a link, but at one point there was a Russian social media app that would let you take a photo of a random person and it would try to match it to a known social media profile.

Someone used the feature to take pictures of people in public looking especially depressed, then, when possible, matched them to photos those people had shared on social media of themselves looking overly happy.

I believe they had 5-10 examples in the story that covered it.


> pictures of people in public looking especially depressed

Except that you have to have that 'grave concern' face in Russia (and most of Eastern Europe), since it's a cultural paradigm. If you don't look like you're going to a funeral at any given moment in a public place, you could be taken for an unserious, unreliable person. Conversely, you have to giggle and scream like a 5-year-old when you come across a friend in a public place.


The app you are referring to is called FindFace

https://en.wikipedia.org/wiki/FindFace


Yes, thanks - I recall FindFace's logo. Still wasn't able to find the series of sad faces in public juxtaposed with super-happy online photos, but this pic gives a sense of how creepy it is:

https://twitter.com/mikko/status/1014127790134882304

Sure, there are a number of companies scanning social media pics for face recognition, but another well-known one is Clearview; one of their investors was bragging that he had the app and would scan people to demo it:

https://www.clearview.ai/



Thanks - not it, but super close; there are a number of examples of juxtapositions there, but the original article was specifically calling out the contrast between real life and online.

Also, while not directly related to the original post, these provide clear examples of how invasive surveillance, and apps with features like this, have the potential to be.


> The project was originally up on YouTube, but EarthCam filed a copyright claim, and the piece has since been taken down.

They just need Google to close their account suddenly and irreversibly and they can get the spooky dystopian techno-panopticon trifecta.


https://twitter.com/driesdepoorter/status/156928587808990823... - original thread by the creator, which I submitted over a week ago, but it got no interest at the time.


Nothing in the post or the videos indicates any facial recognition or any machine learning application. Why would he put the simple object-detection overlays in the video at all, as if that were relevant for anything other than creating the illusion of AI? I imagine he looked up some popular spots with public camera feeds, found posts with those coordinates, and then manually matched the video to the image timestamp. The most impressive thing here is the artist realizing the idea would be provocative and garner attention.


I had an idea for a modern art installation that would look like a big box you could put in a public park or square. Cameras around the box take pictures of passersby and do face recognition against mugshots. When a criminal is found, a spotlight on the box illuminates the passerby and a speaker announces their crime. I figure if you felt bad for criminals (or were lazy), you could also just make it illuminate and accuse random people.

I will never actually do this - but it seems like the kind of modern art that would grab attention, get headlines, and make some kind of point.


Serious men will just burn the box the next night.

Now if you just make it shout Twitter handles and/or Facebook profile names, that would be hilarious.


You could make the box out of concrete and metal. Or, you could just accept that people will burn / destroy it in one night. It's an art installation - the point is provocation and reaction. If people destroy it? Well, I think that would be a satisfying conclusion.


Is it surprising to find publicly accessible video of yourself on the internet after you walked through (or spent 30 minutes photo-shooting in) one of the most popular tourist destinations in the world? That's all the "controversy" or "criticism" in the article sounds like to me: people becoming aware of the existence of earthcam.com.


Can we please stop using "controversial" in place of "important"?


"Important" means something important that I like. "Controversial" means something important that I don't like.

Writing "controversial" makes the reader think something is bad while helping the author avoid the chore of giving any actual reasons for their opinion.



Yes, I speak English. The point is not how "controversy" should be used, it's about how "controversy" is commonly abused for rhetorical purposes.


Why ask permission to change society?

What good does that do, appealing to indirect action?

Is it a polite way to state needs and expectations, a way to indirectly effect change, and a way to open a topic for discussion?


Precisely your 3rd sentence is how I would interpret the comment.

“Can we please X?” succinctly conveys a lot of meaning.


Yes


You don't need AI when you have the GPS location, the time the picture was taken, and recordings of the surveillance cameras in the area.


You don't need GPS location when the photo is obviously of Times Square, either...


Of course not, but that does require some type of image recognition, which is a bit harder to automate (though they had GPS data in that case).


It's Instagram, you can just search by pictures conveniently tagged "Times Square," find one, and trawl through a week of the EarthCam feed of Times Square...

https://www.instagram.com/explore/tags/timessquare/


We need to create a bunch of photos with fake tags and fake GPS info to pollute AIs.


Reminds me of Surveillance Camera Man.


How did he access the camera footage?





