Hacker News

This was an inevitable outcome of the advancement of technology. I would argue that we lost trust in all media a long time ago; it is only now being realized by the masses.

But as usual, we shall adapt and overcome.




> But as usual, we shall adapt and overcome.

I don't believe this is a problem we can "overcome". We will need to learn to live with "alternative facts" being even more prominent than they are now, but I'm not looking forward to it.


> I don't believe this is a problem we can "overcome".

Digital signatures cannot remedy this problem? When you log in to your bank, how do you know you are logging into your bank? In the future, a recording without a signature will be like a bank login without HTTPS is today.


The only thing signatures/HTTPS provide is the identity of the other side; they won't help you determine whether the recording itself is fake.

For this to work, you need an existing trust relationship with the media outlet. Like, OK, I can trust the NYT, so I will trust videos signed by them. But another person distrusts the NYT and trusts only Truth Social. In the past, we could at least agree on basic facts like January 6th actually happening, but I think generative AI will make establishing the facts much more difficult, or even impossible.


You raise a valid point that digital signatures and HTTPS alone cannot guarantee the authenticity of a recording. However, modern smartphones and other mobile devices have the capability to provide stronger assurances about the originality of recordings through the use of tamper-proof secure hardware.

Many high-end smartphones, such as iPhones and some Android devices, incorporate secure enclaves or trusted execution environments (TEEs). These are isolated, tamper-resistant hardware components that can securely store and process sensitive data. When a recording is made on such a device, the secure hardware can associate the recording with additional metadata, including the specific date, time, GPS coordinates, and user account information. This metadata is cryptographically bound to the recording itself.

Furthermore, the device can digitally sign the recording and its associated metadata using a unique key stored within the secure hardware. This signature attests to the recording's originality, and the companies who manage the secure hardware and signing keys, like Apple or Google, can then vouch for the recording's authenticity.

While this approach doesn't completely eliminate the possibility of fake recordings, it significantly raises the bar for creating convincing forgeries. Modifying the recording or its metadata would invalidate the digital signature, making it evident that tampering has occurred.
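A minimal sketch of that binding step, in Python. Everything here is hypothetical: the key is a plain byte string and the signature is an HMAC, whereas a real secure enclave would hold a non-exportable asymmetric key and the vendor would publish the corresponding public key for verification. The structure is the point: the metadata and a hash of the raw recording are signed together, so changing either one invalidates the signature.

```python
import hashlib
import hmac
import json

# Hypothetical device key; on a real phone this would be an asymmetric
# key held inside the secure enclave / TEE, never exported.
DEVICE_KEY = b"hypothetical-secure-enclave-key"

def sign_recording(recording: bytes, metadata: dict) -> dict:
    """Cryptographically bind metadata to a recording and sign both."""
    payload = (hashlib.sha256(recording).hexdigest().encode()
               + json.dumps(metadata, sort_keys=True).encode())
    signature = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"metadata": metadata, "signature": signature}

def verify_recording(recording: bytes, signed: dict) -> bool:
    """Recompute the signature and compare in constant time."""
    payload = (hashlib.sha256(recording).hexdigest().encode()
               + json.dumps(signed["metadata"], sort_keys=True).encode())
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])

video = b"\x00\x01 raw sensor frames"
signed = sign_recording(video, {"time": "2024-05-01T12:00:00Z",
                                "gps": [37.77, -122.42]})
assert verify_recording(video, signed)
# Any edit to the frames (or the metadata) breaks verification:
assert not verify_recording(video + b"edit", signed)
```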

Of course, as you mentioned, trust in the entity verifying the signatures (e.g., Apple or Google) is still required. However, this trust is based on their reputation and the security measures they employ, rather than on the content of the recording itself.


There's no such thing as tamper-proof hardware, only tamper-resistant hardware. Also, the whole "sign what came from the sensor" idea is widely known not to work, because you can easily record a playback of doctored footage. Lots of LLM-isms in this comment, too.


> you can easily record a playback of doctored footage

You believe this is easy when the device has multiple recording sensors, and multidimensional information (such as spatial data and changes in the focus sensors during recording) is part of the digitally signed recording?


Who's proposing such a device to get widespread adoption? I've heard of sensor data signing [1] but not what you're describing.

[1] https://pro.sony/ue_US/solutions/forgery-detection


The concept of sensor data signing to authenticate videos and images captured on mobile devices is still an emerging technology, not yet widely adopted. However, as AI-generated synthetic media becomes more prevalent and potentially problematic, solutions like this may gain traction.

The key idea is to leverage the array of sensors built into modern smartphones and tablets (accelerometer, gyroscope, GPS, WiFi/cellular signal data, etc.) and cryptographically sign the sensor readings along with the visual data itself at the time of capture. This extra layer of verifiable sensor data would help establish that a recording originated from a real physical device in a particular place and time, as opposed to being a purely digital fabrication.
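A rough sketch of that bundling idea (all names hypothetical, and an HMAC standing in for the asymmetric key real secure hardware would use): each sensor stream is hashed into a single manifest, and the manifest is what gets signed, so a forger who swaps out the video must also produce physically consistent readings for every other stream.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"hypothetical-capture-key"  # stand-in for a hardware key

def sign_capture(streams: dict) -> dict:
    """streams maps sensor name -> raw bytes captured at the same moment."""
    manifest = {name: hashlib.sha256(data).hexdigest()
                for name, data in streams.items()}
    blob = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}

capture = {
    "video": b"raw frame data",
    "accelerometer": b"0.01,9.81,0.02;...",
    "gyroscope": b"0.001,0.000,0.002;...",
    "gps": b"37.7749,-122.4194",
}
signed = sign_capture(capture)

# Replacing just the video changes its hash in the manifest, and thus
# the signature over the whole bundle:
forged = dict(capture, video=b"doctored frame data")
assert sign_capture(forged)["signature"] != signed["signature"]
```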

Historically, technologies like digital signatures and public key cryptography started out in niche military/government applications before becoming ubiquitous in the computer era. In a similar way, sensor-level authentication of audiovisual media could follow an adoption curve driven by the growing need to combat sophisticated AI forgeries.


I know I’m logging into my bank because I initiated the connection, and refuse to believe anyone in any other context who claims to be my bank. People are routinely defrauded by scammers who claim to be their bank, and banks are routinely scammed by people who claim to be an account holder.


> I know I’m logging into my bank because I initiated the connection, ...

Just because you initiated the connection, how do you know the other end is your bank? Do you trust every internet company that carries your packets to the bank? Trust their employees? Trust their security practices? Do you trust the firmware on all the devices involved?

> People are routinely defrauded by scammers who claim to be their bank,

I have read about this in the news, just like I read about two-headed snakes, yet I have yet to meet someone it has actually happened to. What fraction of the people you know have had this happen?

Could it be that those people believed, as you do, that "I know I'm logging into my bank because I initiated the connection", as opposed to checking the digital signatures on the connection?
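To be fair, "checking the digital signatures" is something the TLS stack does for you by default. Python's standard library, for instance, ships a client context with both relevant checks already on: the peer must present a certificate that chains to a trusted CA, and the certificate must match the hostname you asked for.

```python
import ssl

# The default client context enforces the two checks that make the
# "I initiated the connection" intuition actually safe:
ctx = ssl.create_default_context()

# 1. The peer must present a certificate signed by a trusted CA.
assert ctx.verify_mode == ssl.CERT_REQUIRED

# 2. The certificate must match the hostname you connected to,
# so a man-in-the-middle with a valid cert for some *other* domain
# still fails the handshake.
assert ctx.check_hostname
```

So the commenter's intuition mostly holds in practice, but only because the browser or TLS library silently does the signature checking on their behalf.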



