So far so good...
> and the word "blue" from an analysis of a patch of blue color in a painting.
What the hell?
The Total Perspective Vortex derives its picture of the whole Universe on the principle of extrapolated matter analyses. To explain — since every piece of matter in the Universe is in some way affected by every other piece of matter in the Universe, it is in theory possible to extrapolate the whole of creation — every sun, every planet, their orbits, their composition and their economic and social history from, say, one small piece of fairy cake.
- Douglas Adams
Article is behind a paywall:
If you get any results — even just a reproducible blip — from things you've made yourself in a controlled environment, then try things you haven't made yourself... but if you can't even get it there, it seems kind of pointless to mess around with really old things.
That's my layman armchair perspective anyway, but from the armchair it makes sense :D
I have no science to back this up. It's just a hunch.
Possibly mortar in walls could record the workers talking. All sorts of pottery start out malleable, so they might be candidates. Cave paintings might be an option - wonder whether being finger-painted would leave biological fingerprints behind - heart rate, for instance.
Pretty wild idea. Here’s hoping it has some legs.
Maybe someday we’ll learn about places like Stonehenge this way.
Olivia's smartphone could dial based on touch tones; that's totally sci-fi these days.
I found some applications on the Play Store that can allegedly decode the tones, so it might not be so strange.
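Decoding DTMF tones doesn't actually need anything exotic: the classic approach is the Goertzel algorithm, which measures a signal's power at each of the eight standard DTMF frequencies and picks the strongest row/column pair. A minimal sketch in pure-stdlib Python (the key layout and 8 kHz sample rate are the standard DTMF parameters; everything else here is illustrative):

```python
import math

DTMF_ROWS = [697, 770, 852, 941]     # Hz, low-group frequencies
DTMF_COLS = [1209, 1336, 1477, 1633] # Hz, high-group frequencies
KEYS = [["1", "2", "3", "A"],
        ["4", "5", "6", "B"],
        ["7", "8", "9", "C"],
        ["*", "0", "#", "D"]]

def goertzel_power(samples, freq, rate):
    """Signal power at `freq` via the Goertzel recurrence."""
    coeff = 2.0 * math.cos(2.0 * math.pi * freq / rate)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def detect_key(samples, rate=8000):
    """Pick the loudest row and column frequency and map them to a key."""
    row = max(DTMF_ROWS, key=lambda f: goertzel_power(samples, f, rate))
    col = max(DTMF_COLS, key=lambda f: goertzel_power(samples, f, rate))
    return KEYS[DTMF_ROWS.index(row)][DTMF_COLS.index(col)]

# Synthesize 50 ms of the key "5" (770 Hz + 1336 Hz) and decode it back.
rate = 8000
tone = [math.sin(2 * math.pi * 770 * n / rate)
        + math.sin(2 * math.pi * 1336 * n / rate)
        for n in range(int(0.05 * rate))]
print(detect_key(tone, rate))  # → 5
```

Real apps additionally check that the winning tones stand far enough above the noise floor before declaring a key press, which matters a lot for tones recovered from a lossy recording.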
and his 2016 PhD thesis, which includes the "chip bag" research.
I'm interested in this research, from the angle of "Can passive surface measurements tell us what is happening inside the human body?"
Sort of orthogonal, but is there a similar way to get sound from seeing someone's lips move in a video? Would it then be possible to recover audio/conversations from old video-only files?
Edit: Guess they can read lips via AI https://www.techemergence.com/machine-learning-that-learns-m...
Now getting the high quality sound from just video would be amazing.
"WiHear aims to detect human speech by analyzing radio reflections from mouth movements. It requires individual users to train the system extensively, and can recognize only a limited number of words (6 words) with high accuracy."
Eavesdropping is the obvious one. What others do you have in mind?
Transcribed from footage included in the documentary "The Last Journey of a Genius" (1989) by Christopher Sykes, a BBC TV production in association with WGBH Boston and Coronet/MTI Film and Video.
That’s a pretty convenient quirk if you ask me.
Their fast one is an APS-C sensor with 4k*3k px, a shutter closing time of ~1s/120,000 and a minimum shutter open time of ~1s/50,000. The closing time might be even faster; I'm just reconstructing it from frame-overhead timing values I remember, adjusted for the share the row skew had. Check the datasheet if you like.
If you know a mechanical shutter that can do such, I'd like to know.
This, by chance, allows laser-flash illumination of objects that are behind a close wall of fog, as you can keep the shutter closed while the light travels to the farther-away object of interest. You will still have the blur, but no longer the massive contrast loss due to light pollution. If it's not as bad as fog, just e.g. normal rainfall, you lack the reflection artifacts that would be common, and only retain the refraction artifacts from the light necessarily passing through it.
These sensors do lack a little dynamic range, but you can compensate with some slight trickery (see the datasheet) to get ~15 stops out of this particular device.
Rolling shutter has applications, but videos for users of low knowledge and high ambitions is not one of them.
c * 1 s / 50,000 is still kilometers, so I’m not sure I understand how this shutter can do what sounds like using time of flight to selectively illuminate stuff at a specific depth.
Most of the shutter time is used to copy the data to the shadow pixel, but just releasing the dark pull won't take long.
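To put rough numbers on this: the gate isn't a point in space but a depth band, and it's the closed delay before opening — not the open time — that excludes the nearby fog. A quick sketch (the 1 µs delay below is a made-up illustration; the 1/50,000 s open time is the figure from the comment above):

```python
C = 299_792_458.0  # speed of light, m/s

def gate_range(delay_s, open_s):
    """Round-trip gating: the laser light travels out and back, so
    divide by 2. Returns (nearest, farthest) visible distance in meters."""
    near = C * delay_s / 2.0
    far = C * (delay_s + open_s) / 2.0
    return near, far

# Keep the shutter closed for 1 microsecond after the flash,
# then hold it open for 1/50,000 s.
near, far = gate_range(1e-6, 1 / 50_000)
print(f"{near:.0f} m to {far / 1000:.1f} km")  # → 150 m to 3.1 km
```

So the open time does give you a kilometers-deep band, as the parent says — but everything inside the first ~150 m (the fog) returned its light while the shutter was still closed, which is where the contrast win comes from.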
You just need to broadcast sufficient noise to cover the signal, i.e. the S:N ratio is such that the receiver (crisp packet, microphone, etc.) can no longer see the signal above the noise floor. Now, this might mean broadcasting a loud noise signal, which could overwhelm your (or other friendly) receivers (ears, etc.) so optimal placement of the jamming noise source becomes an issue.
Generally, you want it closer to the threat than you, so that distance attenuation keeps it bearable for you but still jams possible recording devices. Or, introduce high bandwidth noise vibrations into the surfaces of the area you are in, such as windows or walls. Anyway, it's very possible and in use today in secure facilities that must be protected from audio eavesdropping.
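The distance-attenuation argument can be sketched with the usual free-field rule of thumb: SPL from a point source falls 20·log10(d) dB relative to its 1 m level, i.e. ~6 dB per doubling of distance. All levels and distances below are hypothetical, purely to show the geometry working in your favor:

```python
import math

def spl_at(source_db_at_1m, distance_m):
    """Free-field point source: SPL drops 20*log10(d) dB from its 1 m level."""
    return source_db_at_1m - 20.0 * math.log10(distance_m)

# Hypothetical setup: a 90 dB @ 1 m noise source placed 0.5 m from a
# suspected bug and 4 m from the people talking (speech ~65 dB @ 1 m).
jam_at_bug     = spl_at(90, 0.5)   # ~96 dB: closer than 1 m, so louder
jam_at_talkers = spl_at(90, 4.0)   # ~78 dB: loud but bearable
speech_at_bug  = spl_at(65, 4.0)   # ~53 dB by the time it reaches the bug

snr_at_bug = speech_at_bug - jam_at_bug
print(round(snr_at_bug))  # → -43 (dB: the speech is buried in the noise)
```

Real rooms add reflections and the jammer's spectrum has to actually cover the speech band, but the basic point stands: placement buys you tens of dB for free.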
Maybe if you simultaneously played back segments of dozens of conversations of the participants talking. That would certainly be confusing for the participants.
As an exercise in using tertiary effects to pull in signals you might otherwise be prevented from receiving, well that is pretty cool.
Will you only speak of sensitive subjects in rooms with no windows? Will we be silent in public? These are issues where technology, politics, and human rights intersect.
I suspect that most footage already stored is similarly lossily encoded, and that this technique won't work on it.
Minor nitpick, but the UK doesn't have a CCTV network. It has a huge number of privately owned CCTV cameras and a relatively small number of CCTV cameras operated by individual local authorities and police forces. The privately-owned cameras aren't joined up in any useful way and are often of very poor quality; the publicly owned cameras are overwhelmingly used for real-time monitoring of busy city centre locations.
Installing CCTV cameras is cheap and easy, but usefully monitoring them is expensive and difficult, even with whizz-bang CV algorithms. I'm deeply sceptical as to how useful any state-level CCTV network would be for mass surveillance. 20 million 4K/30fps cameras would produce something in the region of two exabytes per day; just storing that data would cost about $7bn per month.
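The two-exabytes figure checks out as a back-of-envelope estimate if you assume something like 10 Mbps per encoded 4K/30fps stream (the bitrate is my assumption, not a figure from the comment):

```python
CAMERAS = 20_000_000
BITRATE_BPS = 10_000_000  # assumed ~10 Mbps per compressed 4K/30fps stream
SECONDS_PER_DAY = 86_400

bytes_per_day = CAMERAS * BITRATE_BPS / 8 * SECONDS_PER_DAY
print(bytes_per_day / 1e18)  # → 2.16 (exabytes per day)
```

Raw (uncompressed) 4K would be two orders of magnitude worse, so the estimate is if anything generous to the surveillance state.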
F. SCIF Window Criteria
1. Every effort should be made to minimize or eliminate windows in the SCIF, especially on the ground floor.
2. Windows shall be non-opening.
3. Windows shall be protected by security alarms in accordance with Chapter 7 when they are within 18 feet of the ground or an accessible platform.
4. Windows shall provide visual and acoustic protection.
5. Windows shall be treated to provide RF protection when recommended by the CTTA.
6. All windows less than 18 feet above the ground or from the nearest platform affording access to the window (measured from the bottom of the window) shall be protected against forced entry and meet the standard for the perimeter.
Reconstructing audio from video requires that the frequency of the video samples — the number of frames of video captured per second — be higher than the frequency of the audio signal. In some of their experiments, the researchers used a high-speed camera that captured 2,000 to 6,000 frames per second. That’s much faster than the 60 frames per second possible with some smartphones, but well below the frame rates of the best commercial high-speed cameras, which can top 100,000 frames per second.
Basically, it's more the rate of _scanlines_ that matters for this technique, and the quality of the image sensor used. The rate of full frames isn't the limiter.
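Concretely: with a rolling shutter, each scanline is exposed at a slightly different time, so the effective sampling rate of scene vibrations is the row rate, not the frame rate. A sketch of why that matters (ignoring inter-frame blanking, which eats some of the rows):

```python
def nyquist_hz(sample_rate_hz):
    """Highest frequency recoverable at a given sampling rate."""
    return sample_rate_hz / 2.0

fps, rows = 60, 1080  # a common smartphone video mode

# Treating whole frames as samples: far too slow for speech.
print(nyquist_hz(fps))         # → 30.0 Hz

# Treating each scanline as a sample: comfortably covers speech.
print(nyquist_hz(fps * rows))  # → 32400.0 Hz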
While this audio reconstruction wasn’t as faithful as that with the high-speed camera, it may still be good enough to identify the gender of a speaker in a room; the number of speakers; and even, given accurate enough information about the acoustic properties of speakers’ voices, their identities.
Saying it may still be good enough pretty strongly implies that the quality is quite low.
That is exactly why almost all DOD secure areas are windowless.
I'm pretty sure that a bigger factor in why secure facilities have limitations on windows (and especially on ground floor windows) is physical security.
Of course. I mean, that's been obvious for at least a decade.
> Will we be silent in public?
There's no need to be silent. However, one must be aware of surveillance risks, and act accordingly.
> These are issues where technology, politics, and human rights intersect.
For sure. But just wanting privacy doesn't work. And you must always deal with what's so.
It's hard enough to discern intelligible speech from many people who are standing right in front of you.
Thinking atomic bombs :(
The same ones which have prevented large-scale open military conflicts involving superpowers for the last 60+ years? Atom bombs have likely saved more lives than they have taken, compared with what conventional wars fought with modern technology would have cost without MAD.
How do you weigh the certain death of millions vs. peace with a small chance of utter annihilation? I don't know, but I don't think it's as easy as you say.
I frequently encounter people who believe that nuclear energy was harnessed initially for power generation and then co-opted for destructive purposes. In fact, the first nuclear reactors were built to produce plutonium for a bomb.
"It may also have a lower resolution video streaming capacity."
But the new generation "might" have a resolution of 1 m, which is insane.
Then again, good luck knowing which square meter of the 1/3 of the earth's surface you can see has the crisp packet in it.
I still think there will be a place for good old bribery, corruption, and sex-based spy techniques for a while yet.
Still impractical to get sound vibrations from that. But a drone with a laser would work for windows. Think listening in on a conversation in a car.
Still all this tech is useless without knowing where to point it when. Which usually comes down to human led intel and intelligence led tasking.
I think ... when AI starts deciding which conversation to follow or record, then ... well, I for one welcome our new robot overlords.