Preserving trust in photojournalism through authentication technology (reutersagency.com)
58 points by rbanffy 9 months ago | 61 comments



>Editors then work with the pictures, and each successive modification or edit creates a new record in a private database (ProvenDB) that is indexed to the original registration. This allows edit logs to be kept private, but these logs also have their authenticity records registered on the Hedera public ledger.

How exactly do you keep the edit log private without completely losing the chain of authenticity here? One of those edits might be to, say, double the size of a plume of smoke or change a signal flare into a set of missiles[0]. All you have is a promise that at some point, someone opened a real image in some kind of editing software.

The inclusion of a bunch of blockchain nonsense to store the verification also smacks of amateurism. You don't need a public chain with financial costs attached to secure photos you validate yourself. You can just publish a bunch of hashes and a hash of the prior block. Certificate Transparency does this and it works.
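The Certificate Transparency-style alternative described here -- publish hashes plus a hash of the prior block -- can be sketched in a few lines. This is a minimal illustration, not anything Reuters actually runs; all names are made up:

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def build_log(photo_hashes):
    """Append-only log: each entry commits to the previous entry's digest."""
    entries, prev = [], "0" * 64  # genesis value
    for h in photo_hashes:
        entry = f"{prev}:{h}"         # entry embeds the previous digest
        prev = sha256(entry.encode())  # becomes the next link in the chain
        entries.append((entry, prev))
    return entries

def verify_log(entries):
    """Anyone holding the published log can re-check the whole chain."""
    prev = "0" * 64
    for entry, digest in entries:
        if not entry.startswith(prev) or sha256(entry.encode()) != digest:
            return False
        prev = digest
    return True

log = build_log([sha256(b"photo-1"), sha256(b"photo-2")])
assert verify_log(log)
# Tampering with any earlier entry breaks every later digest,
# which is the property a public blockchain is being used for here.
```

No financial incentives or consensus protocol needed: publishing the latest digest somewhere hard to retract (a newspaper, a tweet, a CT-style log) pins the whole history.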

[0] https://en.wikipedia.org/wiki/Adnan_Hajj_photographs_controv...


Provided the signature of the original raw pixels is left in the final JPEG, I think Canon/Reuters can be expected to produce those original bits to any sufficiently curious/legally authorised third party, who can verify the signature to check that they are indeed the original bits. At that point said third party has the initial and final images, which is likely enough to conclude whether the edits were legit (the chain of edits themselves isn't as interesting).

Mostly agree on the blockchain shenanigans, though I suppose using an external blockchain allows you to prove that the image was taken before a certain point in time.


I don't see how this could work, unless they're using some sort of tamper proof perceptual hash (does such an algorithm exist?). If they change any of the pixels in their edits, it's not going to match the original signature. If they compress it to jpeg it'll have a different signature. And nobody's going to request the huge RAW files for every photo.
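The point that any re-encode breaks the match follows directly from how cryptographic hashes behave: there is no notion of "similar", and flipping a single bit yields an unrelated digest. A quick stdlib illustration:

```python
import hashlib

raw = b"\x89raw-sensor-bytes-from-the-camera"
edited = bytearray(raw)
edited[5] ^= 1  # flip one bit: stand-in for a pixel tweak or JPEG re-encode

h1 = hashlib.sha256(raw).hexdigest()
h2 = hashlib.sha256(bytes(edited)).hexdigest()
assert h1 != h2  # so a signature over h1 says nothing about the edited bytes
```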

So this scheme can work to ensure that you're seeing the Reuters published version, not that Reuters itself didn't edit it after the camera took it.


I'm not talking about some software thing that automatically checks whether some image ("the edited image") is "similar enough" to some other image ("the original"). I'm claiming that their approach gives any interested person the ability to eyeball both versions and draw their own conclusions that way -- and that this is adequate for images intended for human consumption. (I didn't say the last part explicitly until now, mind you.)

Does that help?


Maybe my earlier reply was misunderstood.

>nobody's going to request the huge RAW files for every photo

I don't see this as a big issue as there are several easy ways around it -- the simplest being that the camera could compute and store a second digital signature of a smaller, JPEG-compressed version at photo time. Another possibility is third-party businesses popping up that you (or even high-quality news agencies themselves) pay to do the human verification for you.
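That "sign the JPEG at photo time too" idea can be sketched as a manifest the camera emits at shutter time. This is a toy illustration: HMAC stands in for the camera's asymmetric signature, and every name here is invented:

```python
import hashlib
import hmac
import json

# Stand-in for a key burned into the camera at manufacture (invented name).
CAMERA_KEY = b"camera-embedded-key"

def capture(raw_bytes: bytes, jpeg_bytes: bytes) -> dict:
    """At shutter time, sign digests of both the RAW and the in-camera JPEG,
    so the smaller JPEG can later be verified without fetching the RAW."""
    manifest = {
        "raw_sha256": hashlib.sha256(raw_bytes).hexdigest(),
        "jpeg_sha256": hashlib.sha256(jpeg_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(CAMERA_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(manifest: dict, jpeg_bytes: bytes) -> bool:
    """Check the signature over both digests, then the JPEG digest itself."""
    payload = json.dumps(
        {"raw_sha256": manifest["raw_sha256"],
         "jpeg_sha256": manifest["jpeg_sha256"]},
        sort_keys=True,
    ).encode()
    ok_sig = hmac.compare_digest(
        manifest["signature"],
        hmac.new(CAMERA_KEY, payload, hashlib.sha256).hexdigest(),
    )
    return ok_sig and manifest["jpeg_sha256"] == hashlib.sha256(jpeg_bytes).hexdigest()
```

Because both digests live under one signature, the JPEG rendition can't be swapped out without also invalidating the claim about the RAW.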


Usually there's not just one jpeg of a photo, but several. The camera might do one at a very high quality level, but that's going to be different than a downsampled one served over the web, or the mobile resolution one, or the color corrected one, etc.

Regarding your other comment, sure, such a manual system might be possible, but that relies on either Reuters publishing all of their unedited raw photos (which they probably won't) or a third party photo escrow service with all the rights to the Reuters originals. Neither seems likely.

I don't think this is as easy a problem as you might imagine. Perceptual hashing is a thing, but there's no guarantee of forensic accuracy there.

Reuters has a history of doctoring its photos: https://en.wikipedia.org/wiki/Reuters (photo controversy section)


Perceptual hashing isn't relevant here in any way that I can see. That step just doesn't need to be automated in any way.

You have the final image (your browser downloaded it to display the page). Their new tech allows you to acquire something that you can be cryptographically certain is the original image. You can then look at the two images, and judge for yourself whether they look similar enough. Where does perceptual hashing come into it?

ETA:

>Regarding your other comment, sure, such a manual system might be possible, but that relies on either Reuters publishing all of their unedited raw photos (which they probably won't) or a third party photo escrow service with all the rights to the Reuters originals. Neither seems likely.

No it doesn't -- they just need to provide those photos on demand. (If they don't, then their system is a hoax -- like giving someone a key that doesn't open any lock.) See my other comments in this thread.

>Reuters has a history of doctoring its photos: https://en.wikipedia.org/wiki/Reuters (photo controversy section)

Didn't read the link but based on another's reply comment, the added smoke is exactly the type of fraud that this new system will detect the first time anyone bothers to check that photo.


That last comment is a tad disingenuous - a photographer cloned extra smoke from attacks in a fairly obvious way that was quickly picked up, and another was a crop, fairly standard practice. Hardly a “history” of “doctoring” pictures.


I disagree. Those were substantial edits that exaggerated and biased the reporting. And those are just the ones we caught.

It's not OK when reporters or papers do things like that.



This is a misguided attempt that will fail - all signing the image in camera does is verify that those pixels were sent by the camera sensor.

It’s trivial to work around this and create inauthentic signed images by staging a fake scene, or doctoring an image and then feeding the doctored image into the camera sensor.

For more details see https://www.hackerfactor.com/blog/index.php?/archives/919-Cl...


Fake scenes and misplaced cameras are old problems that we already have some solutions for. Since the idea here is to solve the separate and new problem of deepfakes, I don't think this criticism makes a strong case against digital signatures in photography.


It adds a layer of difficulty for fakery. At the very least, such a signed image is a better "source" of truth than an unsigned jpeg from Telegram. We are entering, and perhaps are already in, an era where you need to build a habit of verifying photographs that proclaim some form of truth.

> It’s trivial to work around this and create inauthentic signed images by staging a fake scene

I'm not sure what the implication of this is. Reuters is attempting a basic technological solution to a technological problem (images can be duplicated, altered, and reshared by non-Reuters parties for malicious purposes).

Someone staging a fake scene is not a technological problem that can be solved (though I imagine some amount of depth information and geo-data in the image would help)


Unfortunately, you don't need a fabricated scenario to convince people to believe the opposite of what is true. Media outlets routinely select which facts to report, influencing public perception. While technical professionals are often meticulous in verifying the accuracy of information, even they can't employ Bayesian methods to arrive at the truth if the evidence presented to them is already filtered.


That would require a conspiracy by someone with physical access to the camera. Spoofing the GPS would require even more elaborate techniques. At a minimum if you trust Reuters and their camera people, you know the images taken were taken by them in their custody at those locations. That’s a lot more than we have now.


> Spoofing the GPS

You can already do that now. Somebody even sells premade spoofers, because some people want to screw with Pokémon Go players, and it has already caused safety problems. It's the easiest part of the whole attack: modifying a camera requires skill, but spoofing GPS can be done with a device you may be able to buy off the shelf.


If only GPS signals were cryptographically signed


Cryptographically signing GPS signals makes it harder to spoof GPS coordinates but does not stop it completely.

A spoofer could obtain signed GPS signals from the time and location where they are claiming to have taken a photo and combine them with a photo taken in a different time and place.
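That replay works because the signature covers only the fix, not the photo it travels with. A toy sketch, with HMAC standing in for whatever signing scheme the constellation would use and all names invented:

```python
import hashlib
import hmac
import json

# Invented stand-in for a signed-GPS constellation key.
GPS_KEY = b"constellation-signing-key"

def signed_fix(lat: float, lon: float, t: str):
    """A (location, time) fix signed by the constellation."""
    msg = json.dumps({"lat": lat, "lon": lon, "t": t}).encode()
    return msg, hmac.new(GPS_KEY, msg, hashlib.sha256).digest()

def verify_fix(msg: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sig, hmac.new(GPS_KEY, msg, hashlib.sha256).digest())

# Recorded legitimately at scene A...
msg, sig = signed_fix(50.45, 30.52, "2023-06-01T12:00Z")

# ...then attached by an attacker to a photo taken somewhere else entirely:
assert verify_fix(msg, sig)  # still verifies -- nothing binds the fix to the photo
```

Closing the hole would require the signature to commit to the image itself (or a camera-held nonce), which raw GPS signals can't do.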


I was thinking that if secure timestamping could be done quickly enough, it might be possible to both bound the event by the future-light-cones of the spacetime points the GPS satellites sent the signed signal, and the past-light-cones of the timestamping event, but I suppose that’s probably not feasible (even with signed GPS signals).

Now, maybe on an interplanetary scale (with various sources playing the role of the GPS satellites, and various locations doing secure timestamping) it could be enough in order to be able to establish (beyond plausible hope of spoofing) a kind of rough region in space-time, but, maybe not very precise.


That's already what we have now. Reuters stakes their credibility on this. They just don't have any.


But any trickery employed is traceable back to that specific camera, meaning that the camera owner's reputation is on the line if it's discovered.

Will that reputational risk actually have any teeth? That's up to us (society at large).


Yeah, couldn’t you simply rephotograph a doctored image off a sufficiently high-res monitor?


You could, but if you got busted doing that, your reputation would take a hit. Everyone would know it was you because you own the camera with that private key. (To wriggle out of it, you would need to claim that either your camera was stolen before the photo was taken, or that Canon/Reuters's private key was compromised. The former claim would be harder if there was, e.g., a second digital signature stored in the JPEG showing that you confirmed your identity with a fingerprint 10 minutes earlier.)

Also: Could you, actually? Maybe photos of monitors have "tells" that can be detected with some kind of image processing.


In the end it is not too different from the current situation. If Reuters, or a specific photographer, published manipulated images, their reputation also takes a hit. There is a Wikipedia article, and people know it was you.


You can prevent feeding a doctored image into the sensor if the camera also senses depth and only signs the image and depth information together. This is how they prevent a photo from unlocking Face ID on a phone.
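Binding the two streams under one signature can be sketched like so (a toy illustration; HMAC stands in for the sensor's asymmetric key, and all names are invented):

```python
import hashlib
import hmac

SENSOR_KEY = b"sensor-private-key"  # invented stand-in for a key fused into the sensor

def sign_capture(image: bytes, depth_map: bytes) -> bytes:
    """Sign one digest over both streams so neither can be swapped out alone."""
    digest = hashlib.sha256(image + b"\x00" + depth_map).digest()
    return hmac.new(SENSOR_KEY, digest, hashlib.sha256).digest()
```

A re-photographed monitor would yield a nearly flat depth map, which is what would make the "photo of a photo" attack detectable under this scheme (assuming the depth sensor itself can't be spoofed).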


Agreed, it's no better than using a fingerprint to secure a device.


Seems harder than using Photoshop or AI.


Living in a world where we only trust photos from official photojournalists because everything else may have been doctored is not a good outcome.


We already live in a world where we can't trust photos from everyone who calls themselves an official photojournalist.

I've seen photos of the recent Nazi rally in Orlando that were very slightly manipulated to darken the skin of participants to imply they were, in fact, "infiltrated BLM or Antifa".

Anyone could use this system or a similar one. If the sensor signs the image with its private key, anyone can trace a raw image to a specific sensor and, therefore, to a specific camera. If that image (or enough hash bits that it becomes impractical to fake the data via collisions and still have a recognizable image) is placed on a public ledger, you can prove those pixels are yours and were taken at or before the moment in time they were added to the ledger. If the camera is part of a larger system, such as a phone, the software can add your own credentials to the raw image data, proving it was you who took the picture (unlocking the phone with, say, your face).
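That chain of attestations (sensor key, then owner credentials, then a public ledger entry) might look roughly like this. A toy sketch, with HMAC standing in for real signatures and every key name invented:

```python
import hashlib
import hmac

SENSOR_KEY = b"sensor-private-key"   # fused into the sensor (invented)
USER_KEY = b"owner-credential-key"   # released by the owner's face/fingerprint (invented)

raw = b"raw-pixel-data"

# 1. The sensor attests: "these pixels came off this sensor".
sensor_sig = hmac.new(SENSOR_KEY, raw, hashlib.sha256).digest()

# 2. The phone countersigns the pixels *plus* the sensor attestation,
#    binding "this sensor produced these pixels" to "this person captured them".
user_sig = hmac.new(USER_KEY, raw + sensor_sig, hashlib.sha256).digest()

# 3. A digest of the whole bundle goes on the public ledger, which timestamps
#    it: the pixels existed at or before that ledger entry was made.
ledger_record = hashlib.sha256(raw + sensor_sig + user_sig).hexdigest()
```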

I would not, however, want my personal family photos to be trackable like that. EU restrictions on images already took away a lot of my joy of photographing people.


A couple of questions come to mind:

1. How will the signing key in the camera be protected? Are camera manufacturers a SPOF?

2. Based on the following quote:

>Editors then work with the pictures, and each successive modification or edit creates a new record in a private database (ProvenDB) that is indexed to the original registration.

How do you ensure that one of the modifications was not "replace all pixels with ones generated by AI", or even "replace the Russian flag with a Ukrainian flag"?

I only see this possibly working if the news organization releases the raw signed photos to the public. Otherwise, the edit step can do anything.


Including a digital signature of the raw original in the final image is implicitly a promise that the raw original can be produced on demand (nonrepudiation [0]). Maybe not just anyone can make such a demand, but there would need to be someone (e.g., a court judge) who can.

0: https://en.wikipedia.org/wiki/Digital_Signature#Non-repudiat...


If the edit chain is private, does this mean that there's no way to validate the edits Reuters itself did?


I would expect as a baseline that some third party (maybe not necessarily Joe or Jane Public, but at the very least a subpoena-wielding lawyer) could require Canon to produce the original raw camera pixels, and check that they match the signature in the final publicly available JPEG. At that point, this third party would have access to the first and last stages of the image processing pipeline, which is likely enough to decide whether the edits were legit.


Didn’t read all the details, but isn’t the point of this thing to store the original in a publicly accessible storage system (Storj and Filecoin are mentioned)? There should not be a need to involve Canon or a third party. This is for verifying published photos, after all.


The main point is to prove -- to you, or to any interested third party -- that the photo in the news story currently open in your browser came from a particular bunch of raw pixels from a particular camera via a series of edits done by particular people, with the identities of both the camera and all the editors being recorded in that final browser JPEG in such a way that their involvement can't later be denied by those parties. The reasoning is that this traceability will make photographers and photo distributors more responsible due to the threat of reputational damage if they're discovered to be up to something.

The technical magic is in a (pretty old, pretty basic) crypto primitive called a digital signature, which doesn't actually require storing the originals publicly at all times -- all that's needed for you to verify Canon/Reuters's claim of photo authenticity is for them to supply the original raw pixels when someone "sufficiently important" (maybe you, or maybe a court judge, etc.) demands it. You can then check whether what they give you is in fact the true original by verifying it algorithmically with the JPEG's signature and their public key.

Storing the original in a publicly accessible place is good but kind of secondary -- it's just the most straightforward and convenient way for them to "produce it on demand".
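The produce-on-demand check can be sketched as follows. This is a toy illustration only: HMAC stands in for the real asymmetric signature (where anyone could verify with the public key alone), and the key name is invented:

```python
import hashlib
import hmac

AGENCY_KEY = b"canon-reuters-signing-key"  # invented stand-in for their private key

def embed_claim(original_raw: bytes) -> str:
    """What ships in the published JPEG's metadata: a signed digest of the original."""
    digest = hashlib.sha256(original_raw).digest()
    return hmac.new(AGENCY_KEY, digest, hashlib.sha256).hexdigest()

def verify_claim(produced_original: bytes, embedded_sig: str) -> bool:
    """Run by the third party once the original is produced on demand.
    A pass means what they handed over really is what was signed at capture
    time; a human then eyeballs original vs. published to judge the edits."""
    digest = hashlib.sha256(produced_original).digest()
    expected = hmac.new(AGENCY_KEY, digest, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, embedded_sig)
```

If the check fails, whatever they handed over is not the thing that was signed at capture time, which is exactly the nonrepudiation property discussed above.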

EDIT: Clarified "their signature and public key"


Huh! I know all this from a technical point of view. I was meaning to ask what they are actually doing, since it seemed like the people on this thread were referring to judges and talking to Canon and so forth. It would be a pain to have to go to the courts to verify the authenticity of every photo, and the technical choices seem to indicate that they would also publish the original. It doesn't make sense to me to keep the original private, except when they need to blur or censor parts of a photo for the privacy of people in it or its depiction of violence. In those situations it would be reasonable to keep a photo private and go through the courts to verify its authenticity, but that would not be necessary for all photos.


Well, it's in Canon's interests to appear to be as open and transparent as possible about this, so I expect they would not only make the originals publicly available but also make the process of verifying as frictionless as possible -- maybe even make a phone app or browser extension to do it. I'm just saying that that's technically more than what's required for someone to trust their claim.

OTOH, yes, privacy and upsetting content are reasons why they might hold some (or all) back from Joe Public and only provide the bits to someone more authoritative.


I mean, all the attestation here does is say this photo came from this person / camera. Like a hardware-backed NFT. The trust ultimately lies with the photographer and, by extension, the org they're working for at the time. I'm not sure how different this is from Reuters just publishing the photo and archiving the link on wayback. Any future versions can choose to use that link as provenance, an eyeball is all that's needed. Complex technobabble won't stop screams of "fake news."


The underlying content provenance scheme here (C2PA) was presented at this year’s RWC[1].

[1]: https://iacr.org/submit/files/slides/2023/rwc/rwc2023/13/sli...


This is actually more important than most people might think. Thanks for posting.


This is really cool -- if done right, it has the potential to massively decrease the incidence of false and misleading photos in the world, by massively increasing the reputational risk associated with making and distributing them. I hope regular people start to pay attention to this kind of provenance, since that's where the effect will come from. Time will tell.

(Also, I'm really cool for predicting ~5 years ago that this specific tech would happen -- bringing my count of successful tech predictions to 1.)


The problem with photojournalism is not that people don't trust that the pixels haven't been doctored.

https://www.washingtonian.com/2017/01/20/searching-metaphor-...


Alternatively, it could be that there is more than one problem with photojournalism, and the article describes a solution to just one of them -- but an important one nevertheless.


> Preserving trust in photojournalism

To preserve something, you have to have it first.


So how do you do this if you're documenting something that you'd get in trouble for exposing and don't want publicly traceable back to you?


That's the neat part, you don't. And thus the "problem" of whistleblowers is solved.


Relevant: using ZK-SNARKs to authenticate edits without reliance on a central point of trust[1]. A blockchain in this case would be optional, but perhaps appreciated for distributed and decentralized storage and verification of the ZK proofs.

[1] https://medium.com/@boneh/using-zk-proofs-to-fight-disinform...


That horse has bolted.

The reason is that this trust was misplaced in the first instance.


>That horse has bolted.

I don't follow. Sure, existing images can't be retroactively made verifiable, but tech like this can stem the tide from this point on, and that is useful.

>The reason is that this trust was misplaced in the first instance.

I don't follow this either. Trust in photos? It has always been appropriate to trust a photo in proportion to the difficulty of faking it. Are you commenting on the fact that it has recently become much easier to fake a variety of photos (deepfakes, etc.) but a large fraction of society isn't yet wise to this?


The trust is misplaced because while the photo might accurately capture what the photographer saw as they clicked the button, nothing says it's representative of the overall scene; a different picture of the same scene could tell a very different story.

For an example, consider a side-on closeup of a Trump campaign speech vs. a top-down view, zoomed out, of the whole park.

Photojournalists can choose which pictures to use to make an argument, and people are too trusting that there aren't alternatives that more strongly support a different argument.

The photo doesn't have to be untrue to be misleading


>nothing says its representative of the overall scene

This is a genuine concern, but one that's shared by all forms of inductive reasoning. The same concern could be raised about everything our senses tell us about the world.


Uses Filecoin


That's an odd* implementation decision.

GPG is well established and eminently applicable here. Both the camera hardware and the photojournalist could sign the image data payload and embed the signatures in metadata, especially if the camera hardware accepted new keys. Trust chains could be established via both the publisher and the camera manufacturer.

But let's be real: no one will bother to validate anything.

* I'm being diplomatic


GPG is “well-established” in the sense that it’s old; it’s not considered a modern or well-designed digital signing (or encryption, or trust distribution, etc.) scheme.

I think there are significant issues with establishing the “authenticity” of digital artifacts in any scheme, but GPG would probably make things worse rather than better here.


GPG is not good now? When did that happen?


It’s been about a decade[1]. You can find older links as well, but that’s a decently old one from an authoritative source.

[1]: https://blog.cryptographyengineering.com/2014/08/13/whats-ma...


Ok. I read it, thanks. I plan to continue using GPG.


What are the modern / well-designed alternatives?


There is no “alternative” to GPG, because PGP/GPG’s problem space is poorly defined.

Modern cryptographic protocol design has moved away from “Swiss Army knife” designs: protocols and formats are now designed to do one thing well, rather than a whole bunch of things poorly and with an unintuitive user interface.

In other words: use TLS to communicate securely with services. Use Signal or another modern E2EE IM protocol to communicate securely with humans. For file encryption, use age. For digital signatures, use minisign or Sigstore.


The GPG hate is, in my opinion, overblown.

But modern alternatives would be SSH signing, signify/minisign or cosign.


Canon and Nikon both sell verification kits which can be used to cryptographically prove that a photo wasn't tampered with. Although these kits cost money, it seems it's just a flash drive with special software on it.



