> Because mmWave radar doesn’t process visual light, what it sees is not personally identifiable.
With machine learning improvements, identification and more are possible. From a 2021 paper on IEEE 802.11bf Wi-Fi Sensing, https://arxiv.org/pdf/2103.14918.pdf
> Indeed, it has been shown that SENS-based classifiers can infer privacy-critical information such as keyboard typing, gesture recognition and activity tracking ... since Wi-Fi signals can penetrate hard objects and can be used without the presence of light, end-users may not even realize they are being tracked ... individuals should be provided the opportunity to opt out of SENS services – to avoid being monitored and tracked by the Wi-Fi devices around them. This would require the widespread introduction of reliable SENS algorithms for human or animal identification.
Today, specialist devices or targeted attacks can monitor human activity through walls and closed doors. But that's a world away from the ubiquitous transparency scheduled for commodity WiFi 7 in 2024. If regulators or lawyers don't step in, homes and some businesses may need RF shielding.
Right on. The whole "mmWave radar is detail-rich but gives privacy" line is a contradiction. Any data that is detailed enough to be useful is detailed enough to be a privacy concern. While it might not be the same as leaking nude photos from someone's camera, radar data is something that needs to be carefully thought out. Radar data from a car can provide information about people's driving behavior even without location data: do they follow too close, do they make reckless lane changes into tight spots, do they exceed the regular flowing traffic speed, etc. Radar data from inside a building used to track people can tell when you are at your desk or somewhere else, who you congregate with in the break room, how long you spend in the bathroom, etc. If it's detailed enough to assist you, that data can also be used to monitor you.
Whatever happened to that meme about common CCD cameras picking up UV images that make out details through e.g. swimsuits, but hopefully filtered and removed in software before making it to the images?
Cameras usually include an infrared filter (also called a "hot mirror" or "IR cut") that reflects infrared radiation, mostly because not blocking IR leads to wrong colours for scenes like sunsets, where a disproportionate amount of IR radiation is present.
Many smartphones (used to?) exclude this filter on the selfie-camera because no matter how hot you think you might be, it's not going to be an issue.
If you want to check an IR remote for function, try with the selfie camera of your phone and not the normal one and make sure you are trying in a dark environment because the LED on the remote is going to be rather dim either way.
As an aside, the IR filter is often removed[0] from many cameras when using them for astrophotography. Canon[1] sell dedicated astro cameras that are modified in this way, along with other modifications.
I recall the panic over that back in the day. IIRC it was near-visible infrared, not UV - if you didn't have sufficient IR filtering over the sensor, were using flash (and the flash included a decent amount of IR), and the subject was wearing something thin, you'd potentially get unintended skin detail from the IR reflection.
As I understand it, this is more or less a solved problem on modern cameras such as the one on your smartphone... instead of a bulb you tend to have a white LED for your flash that doesn't really give off useful (any?) IR, and there's better/more consistent IR filtering on the sensor.
I still have a Sony CD camera (as in disc, not CCD) which has near-IR and an IR LED. Fun to play with for night recording (its intended use), and you can see through thin clothes a tiny bit, but range is very limited. Skin patterns much less, due to lacking contrast. Resolution in that mode is also lacking, so detail is not really a thing. I can understand the problem, and I think that is the reason the IR camera in my laptop has a moire pattern in the hardware to make it almost useless to the common eye. But that last part is just a guess.
Those camera manufacturers quickly eliminated that capability, likely via optical filtering. That was about 20 years ago. Consumer-grade cameras do not have that ability nowadays (bad publicity).
50/50 on privacy. Sure, privacy is cool, but I pity the people who must endure seeing the shapely flabs and curves of everyday people. That's got to take a toll on anyone's psyche over time, as an "occupational hazard".
I mean, the material has a bug that wasn't discovered until recently, so we fix the bug and re-deploy the material. It's that simple. Just embed some carbon fibers or something in the swimsuit; there are tons of ways to block UV.
It's like if your car seatbelts have a bug you issue a product recall. Same thing.
Not to mention changing materials would likely have continuous costs, and will impact the material's properties and longevity - potentially in ways that just don't add up to a feasible product.
And then there's the fact that this is a hugely international business, so unless we convince the whole world to do this, we'll be impacting supply lines and flexibility there too.
The whole undertaking seems like it would have an absurd scope - just to avoid adding IR filters that cameras need anyhow to achieve accurate color reproduction, and therefore most cameras already include!
Most people cannot, believe that they cannot, or would not even if they could. We can be sure of that because, despite billions of cameras in phones, pocket cameras or webcams being out there, the web is not flooded with pictures like these.
Putting filters into cameras definitely works. It is not an obstacle to the determined but it prevents most of the abuse most of the time.
> We earned this year’s award for a product targeted to launch in the fourth quarter: our Smart Health Monitoring Light. Featuring a Wi-Fi, Bluetooth Mesh dual chip, the bulb will provide a number of features, including biometric measurement tracking of heart rate, body temperature, and other vital signs, as well as sleep tracking. By connecting multiple bulbs via Bluetooth Mesh and creating a virtual map across your home, this product can even help detect human behavior and determine if someone has fallen and then send for help.
With decent processing a mm-wave picture would be good enough that you could recognise a person, so I don't think there is any inherent anonymity. The image resolution is a function of both wavelength and aperture and a mm-wave antenna can extend over a significant number of wavelengths (ie. large aperture). A stable timebase would further allow processing across time, enabling synthetic aperture.
Also, a person is not a randomly shaped object. If the processing is specifically tuned to detecting and identifying people an awful lot of degrees of freedom can be eliminated, giving more detail in those features that do identify a person.
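For a rough sense of scale here, the diffraction-limited cross-range resolution is about λR/D (wavelength times range over aperture). A quick back-of-envelope sketch, with illustrative numbers of my own choosing:

```python
# Back-of-envelope cross-range resolution for a mmWave aperture.
# Assumption: the standard diffraction-limited estimate, delta_x ~ lambda * R / D.
c = 3e8             # speed of light, m/s
f = 60e9            # 60 GHz carrier
wavelength = c / f  # ~5 mm

def cross_range_resolution(aperture_m, range_m):
    """Approximate cross-range resolution of a real or synthetic aperture."""
    return wavelength * range_m / aperture_m

# A 4 cm chip-scale antenna vs. a 1 m synthetic aperture, target at 3 m:
print(cross_range_resolution(0.04, 3.0))  # ~0.375 m: coarse blobs
print(cross_range_resolution(1.0, 3.0))   # ~0.015 m: enough to outline a body
```

The point being that the antenna on the chip isn't the limit; the effective aperture you can synthesize is.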
I think you are still off about the privacy aspect. At the source, a return from a single radar pulse is pretty uninformative. But a single pixel on a camera sensor is also uninformative. Even at the source, an aggregated set of radar returns is still a privacy concern on the level of an aggregated set of camera pixel values. Not operating in the light domain doesn’t make the privacy concerns go away, it just means that laypeople aren’t going to understand the risks as intuitively.
My reaction as well. Given how many ways there are to uniquely identify individuals, this isn't a selling point for the tech. It's just a different kind of camera, and of course it can identify people; it's just that it will not reproduce a picture of people's faces to do so.
Not yet, because the resolution is too low for now, but you can count on progress.
Thinking more globally, couldn't 5G mmWave make it possible to know exactly how people are moving? (Let's say someone leaves their house; the government could know it in real time, even without a phone, because the mmWave could map their position.)
There's a huge difference between a phased array scanner and the transceivers described in the OP, but mm waves are absolutely not "privacy-preserving"
It's perfectly possible to create feature descriptors for 60 GHz radar, and use it to image rooms, things and possibly people.
It's a privacy problem we are currently trying to overcome in AR. The problem is that because feature descriptors are so small, researchers assume that they don't contain PII. So it's going to take a scandal before it's taken seriously.
Given that most of FAANG are making some sort of 3d map, I'm sure we are going to get pictures of people's bedrooms leaking soon enough.
Not to the same degree, but Roomba seems to have survived the privacy problems of mapping people's houses, and everyone seems to love Alexa. The general public prefers features over privacy.
And let's be honest here, mmWave radar won't be used on its own, but instead will be used in conjunction with visible light and other forms of privacy-violating sensing technology to increase the resolution of the panopticon.
This statement about privacy of mmWave radar you raised made me wonder if someone hasn't already tried to implement SAR[1] with mmWave radar, and a quick search reveals that indeed, this is being looked into, e.g.:
Fundamentally, any radar that can be configured to capture coherent data (i.e. amplitude and phase rather than just amplitude) can be a SAR. SAR is basically post-hoc digital beamforming to sharpen the image. If you have amplitude and phase information (which you can get if your radar can be configured to output raw quadrature baseband), you can do SAR processing. There's no fancy hardware required; SAR is largely a software problem.
You do also need information about where the sensor was located at each pulse, but it doesn't have to be extremely precise - there are algorithms that do iterative processing to accomplish motion compensation.
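To make that concrete, here's a minimal time-domain backprojection sketch on simulated data: a single idealized point scatterer, known sensor positions, no motion error. This is a toy, not a production SAR pipeline:

```python
import numpy as np

# Toy backprojection SAR: one point scatterer, idealized CW echoes,
# perfectly known sensor positions along a linear track. Illustrative only.
c, f = 3e8, 60e9
k0 = 2 * np.pi * f / c                      # wavenumber

target = np.array([0.0, 2.0])               # scatterer at x=0 m, y=2 m
# Sensor moves along x over a 0.5 m synthetic aperture, sampled finely
# (< lambda/2 spacing) so the two-way phase has no grating lobes.
positions = np.stack([np.linspace(-0.25, 0.25, 256), np.zeros(256)], axis=1)
ranges = np.linalg.norm(positions - target, axis=1)
echoes = np.exp(-2j * k0 * ranges)          # two-way phase per pulse

# Backprojection: for each pixel, undo the expected phase and sum coherently.
xs = np.linspace(-0.5, 0.5, 41)
ys = np.linspace(1.5, 2.5, 41)
image = np.zeros((len(ys), len(xs)))
for i, y in enumerate(ys):
    for j, x in enumerate(xs):
        r = np.hypot(positions[:, 0] - x, positions[:, 1] - y)
        image[i, j] = np.abs(np.sum(echoes * np.exp(2j * k0 * r)))

iy, ix = np.unravel_index(image.argmax(), image.shape)
print(xs[ix], ys[iy])  # brightest pixel lands on the scatterer
```

The whole algorithm is that double loop: phase-compensate, sum, take magnitude. Everything hard in real SAR (bandwidth, motion compensation, autofocus) is refinement on top of this.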
Hm, in the EU, due to GDPR, these services are required to make you opt in to them.
Does that mean many of these applications are outright illegal in the EU, due to not having a good way to handle the case where the user doesn't opt in?
Thinking about it a bit more: in Germany there is a law about spy devices.
Most spying on users through websites etc. doesn't fall under it, due to it being formulated in the pre-internet age.
But would WiFi 7 enabled routers fall under it? It's not completely unrealistic that it might happen.
The thing is, this law is rather strict in how illegal spy devices are to be handled => destructive disposal (required of anyone owning such a device).
That could be quite a big fallout.
This law did hit some IoT devices, like some toys with integrated voice recording.
Well it's nice that the most well behaving countries (not that they are THAT well behaved) play nice -- but for countries in mmmm...Asia, Africa and the Middle East this will be a boon, sadly.
I've been somewhat disappointed that the learning material around MIMO radar processing that can be done with popular TI automotive radar chips (and others) is rather limited once you get past the absolute basics.
It doesn't really go into the different schemes for sending out orthogonal waveforms from the transmitters, the design of 2D arrays, much of the basic signal processing (windowing, matched filtering), etc.
I was pleased to see https://hforsten.com/mimo-radar-antenna-arrays.html pop up on hacker news quite a while ago, which was very interesting. Goes into how to plot the beam width for a mimo array, with some nice python code. I want more, but I'm afraid I won't get it unless I read through a phd thesis. Any ideas/tips?
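In case it helps anyone else digging in: the core idea behind those beam-pattern plots is that M TX and N RX elements behave like an M×N-element virtual array whose element positions are the pairwise sums of the TX and RX positions. A toy sketch (ideal isotropic elements, positions in λ/2 units, layout numbers are my own illustrative choice):

```python
import numpy as np

# MIMO virtual array sketch: 3 TX spaced 2*lambda, 4 RX spaced lambda/2
# (positions in units of lambda/2, ideal isotropic elements).
tx = np.array([0.0, 4.0, 8.0])
rx = np.array([0.0, 1.0, 2.0, 3.0])

# Virtual array = all pairwise sums of TX and RX positions.
virtual = np.sort((tx[:, None] + rx[None, :]).ravel())
print(virtual)  # 12 unique positions 0..11: a filled 12-element ULA

def array_factor(positions_half_lambda, theta):
    """|coherent sum of element phases| vs. steering angle (broadside = 0)."""
    phase = np.pi * positions_half_lambda[:, None] * np.sin(theta)[None, :]
    return np.abs(np.exp(1j * phase).sum(axis=0))

theta = np.linspace(-np.pi / 2, np.pi / 2, 1801)
af = array_factor(virtual, theta)
print(theta[af.argmax()])  # main lobe at broadside
```

So 3 TX + 4 RX buys you a 12-element uniform array for angle estimation, which is why these chips quote "12 virtual channels". The TX/RX spacings have to be chosen so the sums tile without gaps.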
> Because mmWave radar doesn’t process visual light, what it sees is not personally identifiable. This makes it an ideal choice for monitoring where a camera would not be appropriate.
Is 5mm resolution not enough to identify a face?
What’s the difference between a 3d model of a person constructed from high res radar and a black and white photo?
The radar sounds even more identifiable.
Edit: Couldn’t you construct a mosaic and have much higher resolution?
I worked with mmWave radars previously, but not professionally. In my opinion, it would be quite challenging to extract enough features from the current generation of mmWave radars, such as the TI IWR1642 ([1]), which has around 4 receivers and 2-3 transmitters, because the incoming data, while having a lot of temporal resolution, is very limited in spatial resolution.
With a greater number of antennas, something like what you describe is theoretically possible, but it becomes cost-prohibitive.
By limiting the number of transmitters / receivers, we can have almost perfectly privacy-preserving monitoring. For example, it would be possible to detect an attack / a fight in a public bathroom, while not exposing anyone.
To an extent, but depending on the sampling rate and frequency and your ability to control the area being observed, there is still a lot of information available for modeling and biometric identification. In the extreme you can detect things like heartbeat, rate of breathing, etc.
For example, I have a CDM324 24GHz radar module here on my desk. I set it up to 'watch' me type this comment from across the keyboard. This is an extremely simple module that I have powered by a bench power supply, with the IF routed directly to the audio input on my desktop. It was sampled with Audacity and amplified a bit to help with visibility. Towards the end of the recording I'm expecting a flat spot followed by regular motion followed by 'noise', as I pause motionless for ~10 seconds or so, then take about 5-6 exaggerated breaths, then resume typing.
This is with zero design, the wrong frequency for the job, and next to zero signal conditioning.
(P.S.: I've included a zoomed-in image of the 'motion demonstration' to show still/breathing/typing, then zoomed in on typing to show the detail, then a spectrogram and waveform of me reaching up to scratch my ear.)
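To see why even a bare Doppler module resolves breathing: chest displacement of a few millimeters sweeps the IF phase through a sizeable fraction of a cycle, because the two-way path is measured against λ/2 ≈ 6.2 mm at 24 GHz. A simulated sketch (synthetic signal, idealized CW phase model; the real IF off a CDM324 is far messier):

```python
import numpy as np

# Simulated CW Doppler return from a breathing chest at 24 GHz.
fs = 100.0                      # IF sample rate, Hz
t = np.arange(0, 32, 1 / fs)    # 32 s capture
f_breath = 0.25                 # 15 breaths per minute
wavelength = 3e8 / 24e9         # ~12.5 mm

# Chest motion: 4 mm peak-to-peak sinusoid.
displacement = 0.002 * np.sin(2 * np.pi * f_breath * t)
iq = np.exp(1j * 4 * np.pi * displacement / wavelength)  # two-way CW phase

# Recover the breathing rate from the spectrum of the unwrapped phase.
phase = np.unwrap(np.angle(iq))
spectrum = np.abs(np.fft.rfft(phase - phase.mean()))
freqs = np.fft.rfftfreq(len(phase), 1 / fs)
print(freqs[spectrum.argmax()])  # 0.25 (Hz, i.e. 15 breaths/min)
```

A 4 mm motion produces about 2 radians of phase swing, which is why it shows up clearly even with zero signal conditioning, exactly as in the recording above.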
The difference is the number of transmitters / receivers. TSA uses an array of them (from top to bottom) with narrow beams, and they also rotate them and ask you not to move, so that effectively their number is considerably higher (tens of thousands) when reconstructing the image. Additionally, they use a higher frequency (160-400GHz), which further helps with resolution. See [1] and [2].
If we limit the number of transmitters / receivers to 3/4, the image reconstruction becomes impossible, similar to how you would not really make a real photo from a few color sensors stationarily looking at a scene, while a linear array of them that moves around an object would make a perfect scanner. It's a non-ideal analogy, but the best I can offer.
This is true in the sense that NNs offer a convenient way to estimate a distribution of hidden values conditioned on visible values. That is an approximation of what is usually understood by "a reconstruction". I would argue that a reconstruction per se would only include parts for which the credible interval could be systematically bounded within a suitable tolerance threshold.
Google knowing every time I’m in the bathroom and keeping a detailed log of the times is a far cry from someone in my family happening to notice I go in there once.
I have no opinion on your reply, I was discussing technical feasibility in response to someone claiming it was infeasible. The normative claims aren't germane or particularly interesting to me.
By not exposing anyone I meant reconstructing an image. Identification is a different story and there are multiple ways on how to achieve that (heartbeat classification is an obvious one).
If you have ideas how to reconstruct an image with 4 receivers / 2-3 transmitters, please, buy a dev kit from TI (they cost <$200, https://www.mouser.com/c/?q=mmwave) and show us a demo.
I have a lot of experience with TI's IWR6843AOP chip and dev kit. It has 3 transmitters and 4 receivers and it'd be impossible to create a facial reconstruction with it.
I've just used it for the point cloud data, which can be streamed via USB. If you want to stream the raw data I think you'll need https://www.ti.com/tool/DCA1000EVM. It has a 1 Gbps ethernet port on it.
What about correlation? If you have other information that can tell you who is alone in the bathroom (common area CCTV, log of RFID key usage, etc) and your radar can infer that something very private is taking place there because the person thinks they are alone, then isn't that still a huge breach of privacy?
I really should have phrased my message more clearly. I meant reconstructing images, not identifying who is inside. The latter is quite feasible, and you're correct that extra data helps with that as well.
Just like with browser fingerprinting, we have a variety of signals which, combined together, give a very high rate of correct identification.
The visual signal there is utterly meaningless for assessing how much is exposed.
For analogy, take https://en.wikipedia.org/wiki/RANDU: it was very popular in its day, and its distribution looked good to the naked eye, but plot it in more than two dimensions and you see that it’s actually extremely terrible, failing spectral tests badly.
The information has been captured; the fact that one visualisation of it doesn’t expose that information is irrelevant and gravely misleading (as in: I don’t want someone that hasn’t already realised this working on this kind of stuff, because they’re dangerously ignorant and/or naive; I would rather they stop, and go and learn about the risks in detail before continuing).
> As you can see the radar doesn't reveal too much, while still being able to discern a lot of what is going on in each scene.
No, you can't see that, because it's a video of a specific visualisation of the data from one specific sensor.
If it can see through walls and measure biometrics like heart rate, mass and body type, build a 3D point cloud of limb positions and measure vibration, then it's more privacy-invasive than a camera, not less.
You don't need to identify a face; you can identify people by gait. If you can resolve something as big as a leg, you've got enough, as long as you have it over 10 seconds or so.
It's odd that people care so much about technology "identifying them" when they have cell phones in their pocket and license plates on their car. There's not much of your daily life left after that, but I suppose it makes them feel good to imagine they're important enough to be tracked.
I opt into carrying a cell phone and it’s when I want. I don’t opt in to my neighbor’s Alexa tracking my comings and goings by scanning through my walls.
There is nothing odd about it, and it has nothing to do with being "important enough to be tracked".
It has become almost impossible to participate in society, without every aspect of your life being recorded in some way.
My bank sells my transaction data to third parties to track my credit history and sell advertising - they do this without my consent, or rather, if I disagree, they will not have me as a customer. I could switch banks but it's basically the same everywhere. All I wanted was a place to store and transfer money.
My phone company sells my location and call data to the government and to marketing companies - they do this without my consent, or rather, if I disagree, they will not have me as a customer. I could switch providers but it's the same everywhere. What I actually wanted was to be able to make phone calls and have data connectivity on the go.
My ISP sells which websites I visit to marketing companies - they do this without my consent, or rather if I disagree, they will not have me as a customer. I could switch ISPs but it's basically the same everywhere. All I wanted was to have data connectivity at my home.
My airline sells my flight data to the government. They do this without my consent. There is no alternative. I just wanted to visit some relatives.
The government tracks all my phone calls. They do this without my consent. There is no alternative. I just wanted to call my friends.
The government tracks where and when and whom I send emails. They do this without my consent. There is no alternative. I just needed to communicate for my business.
When I go out in public, my image is recorded by a multitude of cameras, by a multitude of actors. They do this without my consent. There is no alternative.
My license plates are scanned, depending on my locality, more or less regularly. What I wanted was to make getting around a bit easier over longer distances. There is no alternative.
My medical records are ... well there's something called HIPAA, but the reason it has to exist in the first place is that some seedy actors are trying to get their grubby little hands on this most private information of mine. I do not consent to this. I am sick. There is no alternative to medical care.
I go out in public and, depending on where I go, cameras are being equipped with all sorts of biometric detection capabilities. All I wanted was to go out for a drink; suddenly I am in some database.
I go online to browse the internet. No further explanation needed.
All I wanted was to have wifi connectivity at home. Some asshole has crammed functionality into the standard for wireless connectivity that makes it possible to track where my body is. There is no alternative.
And so on and so on and so on.
All of these things taken together create an information asymmetry. One by one they aren't necessarily that terrible, but accumulating all this information - for which there is constant pressure from the more authoritarian elements of our societies - creates a digital me that absolutely has an effect on my day-to-day life.
Easiest example is my credit score; I am sure one can think up reasons to justify its existence. But the point is that this device exists as an incarnation just for me - without my consent at all. It's not that I need to be very important to have a credit score. I just have one by default.
It does not make me feel "important" to be tracked. It makes me feel oppressed. It's the opposite of important. Being treated like this, decisions being made about me by some usually invisible faces, makes me feel like cattle. I am being treated as a good that can be sold, and from which value can be extracted, in ways other than the exchange of goods and services for money. I am the commodity itself. By default I am presumed guilty, and evidence is being collected against me, even if I will never violate any law of society.
My rights to live as a free, interdependent member of society are being violated by the people who think they can use me and the data they can glean about me for whatever their purpose is - and they do this without even considering my consent.
Information asymmetry enables control. It's the antithesis of freedom.
It has nothing to do with feeling "important" and the choice you are implying people have is not a free choice. It's forced.
> All of these things taken together create an information asymmetry. One by one they aren't necessarily that terrible, but accumulating all this information - for which there is constant pressure from the more authoritarian elements of our societies - creates a digital me that absolutely has an effect on my day-to-day life.
I recall a comment by someone claiming to be working at an Antarctic base: everyone working outside would be wearing the same suit, but you could subconsciously recognize who you were looking at, at great distances, simply by the gait.
It doesn't even need a radar sensor. Everyone moves in a different way, so that extracting patterns from the phone accelerometer can help to identify a person.
Two people carrying the phone in the same pocket or hand will produce very different patterns that can be analyzed to extract the different behavior. All it needs is one chance in which that pattern is successfully paired with the person's identity, and from that moment on all their activities can be identified.
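A toy sketch of the idea (entirely synthetic signals; real gait ID uses far richer features and trained classifiers, not two numbers):

```python
import numpy as np

# Two simulated "walkers" with different step cadence and stride style
# yield separable accelerometer features: dominant frequency + harmonic ratio.
fs = 50.0
t = np.arange(0, 10, 1 / fs)

def walker(cadence_hz, harmonic_weight, noise=0.05, seed=0):
    """Synthetic vertical acceleration: fundamental + second harmonic + noise."""
    rng = np.random.default_rng(seed)
    sig = (np.sin(2 * np.pi * cadence_hz * t)
           + harmonic_weight * np.sin(2 * np.pi * 2 * cadence_hz * t))
    return sig + noise * rng.standard_normal(t.size)

def features(sig):
    """Dominant frequency and second-harmonic-to-fundamental energy ratio."""
    spec = np.abs(np.fft.rfft(sig - sig.mean()))
    freqs = np.fft.rfftfreq(sig.size, 1 / fs)
    f0 = freqs[spec.argmax()]
    h = spec[np.argmin(np.abs(freqs - 2 * f0))] / spec.max()
    return f0, h

print(features(walker(1.8, 0.2)))  # e.g. cadence ~1.8 Hz, ratio ~0.2
print(features(walker(2.1, 0.6)))  # e.g. cadence ~2.1 Hz, ratio ~0.6
```

Even these two crude features already separate the walkers cleanly, which is the unsettling part: you don't need high-fidelity sensing to start telling people apart.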
In terms of an IoT device or phone sensor form factor, not yet, but in the future I expect it will be possible.
Airport body scanners use millimeter wave radar, and they use software to mask the image so it isn't personally identifiable. There are even passive versions that just use background radio waves in a compact form factor:
Each time something comes up like this, I call for jammers. If someone wishes to exploit the shared physics of the environment, I'll have no qualms to do the same.
> This work explores the possibility of countering CSI based localization with an active device that, instead of jamming signals to avoid that a malicious receiver exploits CSI information to locate a person, superimpose on frames a copy of the same frame signal whose goal is not destroying reception as in jamming, but only obfuscate the location-relevant information carried by the CSI. A prototype implementation and early results look promising;
tl;dr is that the technology is very very different than a phone camera. So it's not the same as an IR photo. E.g. Google Pixel 4 has a radar chip AND IR dot projector+camera for face unlock. Presumably, if you could do face identification with Pixel 4's radar sensor only they would not have needed the IR tech.
"One of mmWave's major advantages is its privacy-preserving nature. Because mmWave radar doesn’t process visual light, what it sees is not personally identifiable."
I've been following and building mmWave radar stuff for a little over a year now and I wanted to write an intro to help anyone get up to speed on how it works, why it's special, and what's been happening in the space.
Please write up how to make a simple passive detector so people can have a chance to know if they are being tracked.
This is going to be in every home in 10 years embedded in phones, TVs, smart speakers. All public venues. Trams, buses, subways. Everywhere. But most importantly homes.
Every. Single. Person. 24/7. Breathing rate. Abnormal motions. Current location. Current activity.
Maybe, just maybe if we raise a some awareness we can curb the proliferation a bit.
Surely I can't be the only one who sees this as a nightmare.
The VR headset application appeals to me. If headsets could be dumb terminals they would be a lot lighter and untethered wouldn't suffer from performance loss.
Power consumption for radar can get down to around 0.5mW (edit)
But I imagine for 60GHz WiFi it's probably pretty power hungry.
At this point there's not much noise in the mmWave spectrum (it won't be competing with 2.4GHz and 5GHz anyways). And mmWave doesn't travel that far which is one of the main reasons the FCC unlicensed it in the first place.
Checking some point-to-point wireless radios, it doesn't look like 60 GHz needs much more power. Well, it might be a few times as much, but definitely the same order of magnitude.
I've been waiting for someone to finally make a consumer stud finder that works and this just might finally happen after this deregulation of the spectrum.
The construction industry would hand over wheelbarrows full of money for a BLE phone-paired puck that located studs and pipes behind drywall. Someone could even sell a "pro" version that combined the data from multiple pucks to render a 3D point cloud of the inside of the wall on the user's phone.
Most studs have things (like drywall) nailed to them, and the nails are usually made of iron. So, all you need is a high-power rare earth magnet. I use a particularly strong refrigerator magnet.
If you want to splurge (or are willing to skip a couple of trips to the coffee shop), you could buy one (or two) of these:
I was tipped off to these by someone in the construction industry. He buys a few whenever they're in stock, so that he can have them in every toolbox, lose them, etc.
Note that, once you find a nail head, you can just leave the magnet hanging in place. If you have two, you can hang both, then use a straightedge or chalk-line to mark the point between the two where you want to make a hole. The fancy radar puck thing can't do that. (Maybe, someday, fancy AR goggles will solve that problem, but we're a long way off from that.)
A technical error in one of the diagrams: Hall effect sensors are not binary but one-dimensional, and are used as such; their output voltage is directly proportional to the magnetic field strength. As for PIR sensors, fundamentally they're certainly not binary, but I have no idea if they perhaps expose only a binary signal. A better example of a binary sensor might be an on/off switch.
Analog PIR is typically a differential signal between two receivers. The receivers are placed behind a Fresnel lens (where the lenses are designed to point into each of the sensors). So when an IR source (like a person) walks in front of the sensor, it creates a large spike, because they are being compared against a low-IR area.
There is a form of PIR sensing where it behaves like a thermal camera at a very low resolution (sometimes as low as 2x2 pixels).
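A toy model of that differential behavior (illustrative numbers only, not a datasheet model):

```python
import numpy as np

# Dual-element PIR toy model: the output is the difference between two
# receivers, so a warm body crossing the lens zones produces a bipolar
# spike while uniform ambient IR cancels out.
t = np.arange(0, 4, 0.01)

def element_response(center):
    # Gaussian "blob" of IR as a person crosses that element's field of view
    return np.exp(-((t - center) ** 2) / 0.05)

ambient = 0.5                          # common-mode background IR
a = ambient + element_response(1.8)    # first element sees the person...
b = ambient + element_response(2.2)    # ...then the second one does
out = a - b                            # differential output

print(out.max() > 0.5, out.min() < -0.5)  # bipolar spike from the crossing
print(abs(out[:50]).max() < 1e-6)         # flat before the person arrives
```

The differential pairing is also why a person approaching dead-on, hitting both zones equally, can be nearly invisible to a PIR.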
> mmWave radar doesn’t process visual light, what it sees is not personally identifiable
that is not how "personally identifiable" works
as long as you can (re-)identify a person with it it's personally identifiable data. Even if identification is done by using a mmWave radar and some simple ML.
Humans come in greater variety than airplanes but recognition, especially with things like gait analysis, should be quite possible. Now, I'm skeptical that a photo of a person contains enough information to allow for mmWave recognition though, but I'd assume video does.
Interesting, though it's enough that two independent mmWave radars can re-recognize a person for the data to count as identifiable. No need to involve visual data. Especially if mmWave radars become commonplace.
"The promise of ambient computing is that technology gets out of the way while becoming more helpful at accomplishing what we want. It anticipates our needs by understanding the context of our environment and situation. It serves us, rather than us serving it. "
This is the optimistic case. Perhaps more likely is that 'ambient' computing will smother us in surveillance, ads, 'nudges', restrictions (you seem tired or drunk, you may not drive), etc.
Challenges to keep devices affordable aside ($3.65 is a lot for a BOM add-on), radar is one of the first non-human senses I've had to design for. From an ML perspective it offers significant advantages over static images: it's independent of ambient lighting, and sample rates are higher than low-cost cameras can manage, without ISO noise.
> mmWave radar used inside the cabin as a sensor to detect a child left behind in a hot car
Call me a cynic, but since the power button stopped being a power button I have trust issues. In the end we'll have mmWave chips in our company devices that can tell middle managers if we've been at our desks for all 28,800 seconds of the work day and exactly how hard we've been working; that's a more likely outcome than saving a kid in a hot car.
You could argue that today's circumstance is why you DON'T need explicit laws prohibiting workplace monitoring.
Anyone with the desire could set up such systems today. But they don't. Because they're expensive, inaccurate, and ultimately not all that useful in the social context. There are places that do employ such monitoring, but they're a corner case.
Trust is a core commodity in the workplace. It's usually more profitable to establish a trustworthy workgroup to accomplish some goal than it is to accomplish some goal WHILE monitoring the workgroup's every move for infractions.
> There are places that do employ such monitoring, but they're a corner case.
Don't Amazon warehouses already monitor workers in this much detail? Just yesterday I read someone talking about how managers get reports containing, amongst other things, how often a person was standing still. Lots of companies squeeze their employees like this, especially low-paid employees.
I have a theory / idea I call MOOP - massive open online psychology - the idea is that our phones / radar devices monitor our daily interactions - tone of voice, body gestures - to get a state of our anger / love / etc between our families (and work). And then uses simple epidemiology to help guide / train us.
The basic idea is your phone becomes your life coach.
This "privacy protecting" radar (which does not seem privacy protecting at all — it merely avoids leaking images, which is bad-PR avoidance) looks like the same direction.
I agree this will happen, I suspect Apple is not only in the best position to exploit mmWave hardware but already well down the road in related interaction design.
Apple Watch already offers limited physical sense-based life coaching in the form of hand washing duration tracking and feedback.
Apple Watch’s assistive interaction with the UI, which quietly debuted alongside iOS 15, put an important set of physical sense-based interaction primitives into production.
They suggest to me the company has likely progressed capabilities sans mmWave.
I assume moop is a slang term then? what's the burn?
And yes, I am sure many companies are thinking (or at least have people in them thinking like this). A few issues are the legal and cultural framework (I am quite happy for my personal data to be shared with accredited Health (ie NHS) researchers, but not with the Apple Watch product manager)
Epidemiology at this level is going to have transformative effects, but it needs to be trusted. I mean, the only difference between a utopia of guided humans and a surveillance state is who gets the data and what they do with it. The people in the free country and the ones in the oppressive totalitarian state will be wearing the same devices with the same radar.
At Burning Man, moop is an acronym for Matter Out Of Place, a reference to the leave-no-trace principle of the event. People will run after a small piece of material. Again, NBD, but in tech circles this might be the first thing people think of with that acronym.
I do think Apple captures plenty of metadata, but I do not know how much physical motion is getting sent back. One of the goals of the secure enclave was to store biometrics. And despite some notable concerns, the company has generally aligned its business model and branding to be privacy focused.
I can't think of another large company I'd trust more to handle this kind of information.
>Epidemiology at this level is going to have transformative effects...
The transformation is already underway. When Apple delivered Google Maps on iPhone with its pulsing blue dot, those with the technology instantly leapt ahead of those without it. I wonder how much stress has been relieved just from people knowing where they are.
I agree the technology can be misused, even to oppress. But this is true with any tech.
My concern is more about the disparity of those who have this tech and those that don't. The enhancements could start to show in a matter of years and quality of recommendations could coalesce into what will look like a new plane of daily living.
Arms race: help evolve 802.11bf with deep understanding of risk management. Sell souped-up localization and CSI countermeasure devices, e.g. for industrial laboratories. Work with two celebrity neighbors on a sanctioned, comedic public demo of seeing through home and business walls, in advance of Wi-Fi 7 launch. Publicity will mean a larger market for both attack and defense products. If it's done early enough, it could materially influence the WiFi Sensing standard and thus devices racing to be the first to support not-yet-official WiFi 7. There are legitimate use cases, but consent and bounded scope are critical for mass-market acceptance. Non-naive experts can help.
> Imagine that someone wants to illegally track the position of a person inside a laboratory, for instance to measure how much time is spent doing different activities at different desks, as depicted in the upper picture. How much effective can this attack be? ... With CSI-MURDER, the localization becomes impossible because results will seem random, thus preserving the person privacy without destroying Wi-Fi communications
Presence detection. Make a $50 device I can run on a battery/Power over Ethernet/etc.
Make it all locally manageable, and make it work with HomeAssistant, so I don't have to deal with it "calling home" or shipping data outside my LAN.
I'm not a huge fan of the current crop of presence detection, and mmWave seems like just the thing. Eventually, person detection would be fantastic, but it MUST NOT call home outside the LAN. I don't want that junk going on the internet in any way, shape, or fashion.
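For the local-only Home Assistant angle, the usual plumbing would be MQTT discovery on the LAN. A minimal Python sketch of what a DIY mmWave presence node might publish — the device id and topic names are hypothetical, and an actual MQTT client (e.g. paho-mqtt pointed at a local broker) would do the publishing:

```python
import json

# Hypothetical device id; Home Assistant's MQTT discovery convention is
# homeassistant/<component>/<object_id>/config for the config payload.
device_id = "mmwave_livingroom"
config_topic = f"homeassistant/binary_sensor/{device_id}/config"
state_topic = f"home/{device_id}/state"

# Discovery payload describing the sensor to Home Assistant.
config_payload = json.dumps({
    "name": "Living room presence",
    "device_class": "occupancy",
    "state_topic": state_topic,
    "payload_on": "ON",
    "payload_off": "OFF",
})

# A LAN-only MQTT client would publish config_payload (retained) to
# config_topic once, then "ON"/"OFF" to state_topic as the radar reports
# presence -- no traffic ever needs to leave the local network.
```

The point of the sketch is that nothing here requires a cloud round-trip: discovery, state, and automation all stay on the LAN.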
I need a presence detector that's able to detect children near my pool.
I need a presence detector that can, via gait detection (or other), detect when a specific person crosses a threshold (e.g. autism or Alzheimer's with a tendency to run off)
As a guy that works on old cars (and drives them fast) that will never have OEM support for e.g. ADAS, I'd love to have access to a small, discreet (i.e. not visually excessive, but "hideable"), modular set of sensors that could be used to build e.g. ADAS/parking sensors/(pipe dream) automated driving.
I'm sure the OEMs would be interested, but I'm looking for something I can retrofit.
This is a fantastic idea, i.e. refactor the timeline of automotive electronics to backport modern technology options into old car branches, retaining supply chain control in the hands of an individual human mechanic-sysadmin-owner. Reduce the false conflict between technological convenience and security/privacy.
Very dumb question: If mmWave is operating at 60GHz, how would it be possible for processors (which are operating at less than 1/10th of that clock rate) to process that sampling rate?
In short, 60 GHz is the CARRIER frequency (the central frequency) around which a modulated signal is centred or otherwise aligned. The modulated signal BANDWIDTH (channel width) is where the signal information is encoded, and it is measured in megahertz (MHz), not gigahertz.
E.g. for 802.11a/n/ac the CARRIER is in the 5.x GHz band but the modulated signal BANDWIDTH will be one of 20, 40, 80, or 160 MHz around a central frequency [0]. The 'base' 20 MHz bandwidth at best performance (signal-to-noise ratio) can modulate (encode) a raw 54 Mbit/s.
Intel publish [1] an easy-to-read set of tables that show the relationships between standards, frequencies, bandwidths, MIMO, and modulation schemes.
[0] simplified - in practice these standards use OFDM, spreading the encoded bits across many narrow subcarriers within the channel, rather than a single modulated carrier
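To make the carrier-vs-bandwidth distinction concrete: in the 5 GHz band, channel centres follow 5000 MHz + 5 × channel number, and the occupied spectrum is just the channel bandwidth straddling that centre. A toy Python calculation (channel 36 is only an example):

```python
# 5 GHz Wi-Fi channel centres follow: f_centre (MHz) = 5000 + 5 * channel.
def channel_center_mhz(channel: int) -> int:
    return 5000 + 5 * channel

# A 20 MHz-wide signal on channel 36 sits on a 5180 MHz carrier and
# occupies roughly 5170-5190 MHz; the information lives in that 20 MHz
# of bandwidth, not in the multi-GHz carrier itself.
center = channel_center_mhz(36)
low_edge, high_edge = center - 10, center + 10
```

The same logic scales up to 60 GHz: a huge carrier frequency, but a comparatively narrow modulated band carrying the actual information.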
The same way you can receive/transmit a 5GHz wifi signal using devices running at a fraction of that clock rate: most of the critical high-frequency signal processing happens using analog electronics. The actual sample rate of the digitized signal is much lower.
Yes. The relevant information may be contained in a narrow swath of bandwidth riding within that 60 GHz. Say you use 60 GHz as a carrier but only care about changes happening below 10 kHz. You can separate that out from the returned signal and process only 20k samples/second.
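A scaled-down Python sketch of that mix-down-and-decimate idea (all frequencies shrunk by orders of magnitude so it runs as a discrete simulation; the 60 GHz carrier becomes 100 kHz here, and decimating to one sample per 50 mirrors the 20k samples/second figure in the comment):

```python
import cmath
import math

fs = 1_000_000  # simulated ADC sample rate (scaled-down analogy)
fc = 100_000    # "carrier" frequency, standing in for 60 GHz
fm = 1_000      # slow information of interest (the sub-10 kHz changes)
n = 10_000

# Received signal: a carrier whose phase is modulated by the slow signal.
rx = [math.cos(2 * math.pi * fc * t / fs + math.sin(2 * math.pi * fm * t / fs))
      for t in range(n)]

# Mix down to baseband with a complex local oscillator at the carrier...
bb = [r * cmath.exp(-2j * math.pi * fc * t / fs) for t, r in enumerate(rx)]

# ...then low-pass and decimate by averaging blocks of 50 samples. The
# 2*fc mixing product averages out, leaving a slow stream that still
# carries the 1 kHz information in its phase.
dec = [sum(bb[i:i + 50]) / 50 for i in range(0, n, 50)]
phase = [cmath.phase(s) for s in dec]
```

After decimation, `phase` tracks the slow modulating signal even though nothing downstream ever runs at the carrier rate — the same reason a processor far slower than 60 GHz can handle mmWave returns.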
The other comments are good answers to your question, but here's a fun little fact: if you want to directly read a 60 GHz signal, you need to sample it at 120 GHz to get it all! Good old Nyquist.
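That sampling limit is easy to demonstrate with toy numbers in Python: a tone above half the sample rate produces exactly the same samples as one below it, so the two are indistinguishable once digitized.

```python
import math

fs = 100.0  # sample rate; Nyquist limit is fs / 2 = 50
f1 = 30.0   # tone below the Nyquist limit
f2 = 70.0   # tone above it: fs - f2 = 30, so it masquerades as f1

s1 = [math.cos(2 * math.pi * f1 * n / fs) for n in range(64)]
s2 = [math.cos(2 * math.pi * f2 * n / fs) for n in range(64)]

# The two sampled sequences are numerically identical: a 70 Hz tone
# sampled at 100 S/s aliases onto 30 Hz.
assert all(abs(a - b) < 1e-9 for a, b in zip(s1, s2))
```

Hence the need to sample at more than twice the highest frequency present — or, as the other replies note, to mix the signal down first so the digitizer never sees the full 60 GHz.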
> In October 2019, Soli shipped in the Pixel 4, marketed to consumers as "Motion Sense." It provided users with a faster face unlock along with some basic touchless gestures like controlling music. Google must have decided it was still too early as they haven’t included Soli in any of the Pixel phones since.
Hm, the article makes it sound as if mmWave radars are a fairly new addition to cars. But 77 GHz radar sensors have been in cars for over a decade now, thanks to suppliers such as Bosch and Continental.
A cypherpunk would say just the opposite: that privacy is a technical problem that, properly solved, relieves us of political concerns.
Both views seem to miss the mark. The crypto folks see mathematical certitude while ignoring the dynamics of the social/legal context in which it must operate. By contrast, those who argue that "artifacts don't have politics" would absolve technologists of an ethical duty to consider how their products impact society.
In short, privacy requires both technical and legal protections, as well as a society that demands it as a fundamental human right.
It is a technological problem too. Telegram has a much better UI/UX than Signal, and not having end-to-end encryption by default is part of the reason.
technology is a political/legal problem. the printing press, gunpowder, the internet. technology has always been in the middle of political/legal problems.
man that mmwave stuff built into phones and watches looks great.
Would love to see that on my watch and Galaxy Fold3... less touching but gesture control. Want.
Ripple crypto doesn't own that name in any domain other than crypto so it can be used in other applications without an issue. The name makes a great deal of sense in this context since mmWave uses tiny waves; otherwise known as ripples.
If we won't see it coming, and - as many commenters have pointed out - it will indeed comprise personally identifiable information despite the (absurd?) claims to the contrary, how does this all fit with the GDPR right to be forgotten?
With machine learning improvements, identification and more are possible. From a 2021 paper on IEEE 802.11bf Wi-Fi Sensing, https://arxiv.org/pdf/2103.14918.pdf
> Indeed, it has been shown that SENS-based classifiers can infer privacy-critical information such as keyboard typing, gesture recognition and activity tracking ... since Wi-Fi signals can penetrate hard objects and can be used without the presence of light, end-users may not even realize they are being tracked ... individuals should be provided the opportunity to opt out of SENS services – to avoid being monitored and tracked by the Wi-Fi devices around them. This would require the widespread introduction of reliable SENS algorithm for human or animal identification.
Prior discussion of WiFi Sensing:
Jan 2022, https://news.ycombinator.com/item?id=29901587
May 2021, https://news.ycombinator.com/item?id=27123493
400+ research papers on wireless activity sensing, showing steady improvement in detection techniques, https://dhalperi.github.io/linux-80211n-csitool/#external
Today, specialist devices or targeted attacks can monitor human activity through walls and closed doors. But that's a world away from the ubiquitous transparency scheduled for commodity WiFi 7 in 2024. If regulators or lawyers don't step in, homes and some businesses may need RF shielding.