How would this be used for surveillance? You do realize that this isn't reading people's minds or internal dialogue, right? It requires making specific physical motions, as if you were actually speaking. If the subject is already talking, why bother with this when you could use directional microphones or an acoustic phased array?
Furthermore, as someone who has worked with radar and remote imaging: it is very difficult to spy on people with a technique like this without a device right in their face, especially if the subject is moving around. It is also likely that a lot of calibration and characterization is required to produce good results, which will vary drastically from person to person depending on physiology.
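To see why distance is so punishing, here is a minimal sketch of the classic monostatic radar range equation. All the numbers below are illustrative assumptions (a 60 GHz mmWave radar, a guessed effective cross-section for throat-skin motion), not measurements from any real device:

```python
# Radar range equation: received power falls off as 1/R^4,
# which is why standoff sensing at a distance is so hard.
import math

def received_power(p_t, gain, wavelength, rcs, r):
    """Monostatic radar equation:
    P_r = P_t * G^2 * lambda^2 * sigma / ((4*pi)^3 * R^4)
    """
    return p_t * gain**2 * wavelength**2 * rcs / ((4 * math.pi) ** 3 * r**4)

# Hypothetical numbers, for illustration only:
p_t = 1.0           # 1 W transmit power
gain = 100.0        # 20 dBi antenna gain
wavelength = 0.005  # ~60 GHz mmWave radar (5 mm wavelength)
rcs = 1e-4          # assumed tiny cross-section of throat-skin motion, m^2

p_close = received_power(p_t, gain, wavelength, rcs, r=0.3)  # 30 cm away
p_far = received_power(p_t, gain, wavelength, rcs, r=10.0)   # 10 m away

# Moving from 30 cm to 10 m costs a factor of (10/0.3)^4, roughly a
# million times less returned signal, before any motion or clutter.
print(p_close / p_far)
```

Everything except range cancels in the ratio, so the R^-4 penalty holds regardless of the exact hardware assumed.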
As to the possibility of some "top secret" state technology that can do radar imaging with range and fidelity miles ahead of research or less-classified military technology: why would they spend such amazing tech on something like this? There are also established equations that define what is physically possible. Many radar systems today already operate very close to the optimum allowed by real-world semiconductor materials; that is, they fly close to the Shannon channel capacity limit. It's just not possible to make magical radar devices.
Eric Schmidt: However, our report says that it's really important for us to find a way to maintain two generations of semiconductor leadership ahead of China. Now, the history here is important. In the 1980s, we created a group called SEMATECH. We had a bunch of semiconductor manufacturing in America. Eventually that all moved to East Asia, primarily to Singapore, then South Korea, and now Taiwan through TSMC. The most important chips are made by Samsung and TSMC, in South Korea and Taiwan. China has had over 30 years to plan to try to catch up. It's really difficult.
Eric Schmidt: We don't want them to catch up. We want to stay ahead. We call for all sorts of techniques to try to make sure that we rebuild domestic semiconductor design and manufacturing capacity within the United States. This is important, by the way, for our commercial industry as well as for national security, for obvious reasons. By the way, by chips I'm not just referring to CPU chips; there's a whole new generation, I'll give you an example, of sensor chips that sense things. It's really important that those be built in America.
Sensitive radar sensors plus machine learning may or may not be enough to extract people's inner speech, and would surely be a risk to "national security".
If someone like Schmidt is so afraid of this becoming available to other countries, it is safer to assume it is feasible and he knows it, given that he has also been saying people will be able to clone themselves as virtual assistants that will outlive them.
Either way, feasible or not, researching and demonstrating it becomes a whole lot more difficult when a billionaire is "calling for all sorts of techniques" to ensure their monopoly.
I’m imagining that every single human has a thinking signature detectable at quite a distance, and that this is then fed into a virtual world where machine learning predicts every single person on earth in real time, with forward propagation of scenarios.
Where did you get this idea of a "thinking signature" from? What you're describing is simply not possible. Even assuming people are completely deterministic at the nanoscale, you cannot propagate forward even half a second without accounting for stimulus from the entire world. You also cannot simulate people's minds without knowing what essentially every molecule is doing. Machine learning won't help you much here.
EDIT: Looking at your profile after responding to another of your comments, it looks like you believe in some sort of "vibration" that seems to be different from the physics concept? What do you mean by vibration?
No, remote mind reading is not possible. Local mind reading is also not possible; at best, under very controlled conditions, it is possible to get a vague idea of which part of the brain is working hard, and even to "predict" a human decision a few seconds ahead.
Remote mind reading also runs up against thermodynamics and information theory. For example, Shannon's channel capacity theorem, used extensively in telecommunications engineering, basically means that even with the best antenna, the best receiver, the best coding, and generally ideal conditions, it is not possible to receive significant data from a human brain in real time. The human brain just isn't shaped to be an antenna, and the electromagnetic radiation it does produce as a byproduct of signaling via electric potentials is extremely faint.
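To put a rough number on that, here is a back-of-the-envelope Shannon capacity calculation. The signal power is an assumed illustrative figure (1e-21 W received at a distance is, if anything, generous for passive brain emissions in the sub-100 Hz EEG band), not a measured value:

```python
# Shannon-Hartley: C = B * log2(1 + S/N), with thermal noise N = k*T*B.
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def shannon_capacity(bandwidth_hz, signal_w, temp_k=290.0):
    """Channel capacity in bits/s against a room-temperature noise floor."""
    noise_w = k_B * temp_k * bandwidth_hz
    return bandwidth_hz * math.log2(1.0 + signal_w / noise_w)

# Assumed numbers: brain electrical activity lives below ~100 Hz, and the
# power actually radiated to a receiver meters away is vanishingly small.
cap = shannon_capacity(bandwidth_hz=100.0, signal_w=1e-21)
print(cap)  # well under one bit per second
```

Even with an ideal receiver and these charitable assumptions, the capacity comes out at a fraction of a bit per second, nowhere near enough to carry inner speech in real time.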
There are also lots of people who do not have an "internal monologue," if that means anything. Once again, in summary: no, and no.
Project Summary by Varun Chandrashekhar: "I have designed and developed a speech interface for the paralyzed, which they can use to communicate without speaking. This device detects speech-related electrical signals from the throat and converts them into letters or words that we recognize using machine learning models."
Couldn’t an ML model generalize the information into an ontological tree that makes sense in the general case, even without an internal monologue?
And aren’t you being quite dismissive and absolute, treating it as flatly impossible rather than as possible up to a limit set by measurement technology?
Given that something like Neuralink works even at today's level, you have the possibility of predicting and mechanizing systems based on thought patterns. It becomes a question of measurement, and at what distance: electrodes in the brain versus whatever signal-processing technology avails itself over time.
P.S. The mind and cranium work great as an inverted antenna: just hold your car keys next to your head for ~25% increased range when finding your car. :p
https://www.forbes.com/sites/davidhambling/2021/07/06/ufos-p...