So for context I looked at the papers behind the scenarios.
- The heart rate sensing is done on a smart watch, not a phone, and needs data from the actual heart rate sensor every couple of days [1].
- The breathing rate is determined from a phone put on the breast or the abdomen [2]. Not really a threat vector in that form.
- The audio stuff is incredibly impressive [3], but it doesn't look like they can reconstruct text with meaningful reliability, it's more about identifying the person or at least the gender of the person on the other end of the line.
The location and activity detection scenarios seem the most credible to me, but for targeted attacks the audio reconstruction might also work. The other two don't really seem credible to me yet, but it's good to be aware of them.
One of my pet peeves is that these articles never report the a priori distribution of the labels being predicted. Guessing whether you're sitting vs walking with 90% accuracy is not very useful if most people are sitting most of the time. Actually, the article here doesn't even report accuracy: neither the standard deviation of the error for scalar predictions, nor the precision/recall and number of classes for categorical predictions. Actual results would go a long way toward making the article meaningful.
There's a growing number of these "computers can now predict..." articles, like "one email from you can tell your mood" or "your choice of snap filter reveals your age." Basically, in practice anything can predict anything, usually slightly better than chance, so there's an infinite supply of shocking-sounding articles to write. Given a bit of training data, I could easily write a script to predict everyone's salary here from their comments using bag-of-words, and I bet it would beat random guessing.
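To make the "anything predicts anything" point concrete, here's a toy sketch of that salary-guessing script: a nearest-centroid classifier over bag-of-words counts, standard library only. The training data and labels below are entirely invented for illustration.

```python
from collections import Counter
import math

def bow(text):
    """Lowercase bag-of-words vector as a Counter."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse Counter vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def train_centroids(labeled_comments):
    """Sum word counts per class to form one centroid per label."""
    centroids = {}
    for text, label in labeled_comments:
        centroids.setdefault(label, Counter()).update(bow(text))
    return centroids

def predict(centroids, text):
    """Assign the label whose centroid is most similar to the comment."""
    v = bow(text)
    return max(centroids, key=lambda label: cosine(centroids[label], v))

# Made-up training comments, just to show the mechanics.
train = [
    ("we should rewrite this in rust for performance", "high"),
    ("my kubernetes cluster scales our microservices", "high"),
    ("still learning how loops work in python", "low"),
    ("my first homework assignment is due tomorrow", "low"),
]
model = train_centroids(train)
print(predict(model, "rust performance matters"))  # -> "high"
```

With even a little real labeled data, this kind of model usually does beat random guessing, which is exactly why such headlines are cheap to generate.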
For what it's worth, I did a fair bit of research on using accelerometers for heart rate and respiratory rate determination (https://ieeexplore.ieee.org/abstract/document/5504743), and while placement on the abdomen/chest is preferable, if the subject has high chest excursion and low adipose tissue, I was in some instances able to get a good signal from a device worn in a trouser or jacket pocket (especially when sitting). I never published that because it wasn't useful in a medical setting, but it could be of interest if you're recording opportunistically.
Using the accelerometer/gyro data, you can also guess the password typed on the device with quite good accuracy (it depends), provided you type with the phone in your hand and not on a flat surface.
It does, but I unlock it with FaceID rather than a password. Cameras recording me typing my bitwarden master password are a bigger security concern for my password security already.
Wish there was an option to disable the magnified, per-key, onscreen keyboard bubbles when entering a password. Provide an option to show the password on entry or not, same as desktop input fields for passwords. But don't magnify each character and invert the screen to make it easier for camera imaging.
There is, at least on iPhones. But it is for all keyboard input, not just passwords. Go to Settings -> General -> Keyboard, toggle Character Preview to off.
You could create an app such as a (kind of) "navigation" app.
But I'm not sure about collecting data while the screen is locked and a password is required. I'd need to play with it.
I would guess the vast majority of people who need to set a PIN for the CVS app or their banking app or whatever use the same PIN to unlock their phone.
If they’re creating a PIN in the app, the developer could just store that in plain text more easily than trying to derive it from the accelerometer. I think you’re right that most people probably use the same PIN for their phone and apps, but then the app maker has easy access to the PIN because the user gave it to them
Health definitely does (and likely needs to for much of its functionality), but it’s first party so you’ve pretty much already trusted the vendor with that data.
The Future Interfaces Group at CMU has done a lot of interesting research showing they can infer a large amount of environmental context from the accelerometer/gyroscope alone. http://www.figlab.com/
>Facebook reads the accelerometer all the time. Facebook actually shows a support prompt if a shake event is detected across the app. This could be one reason why Facebook reads accelerometer data.
No. Shake gestures are handled at the OS level and you only get began/changed/ended callbacks. Raw accelerometer data requires the CoreMotion framework, and it’s a lower level API. They are definitely using it for something else.
This is also confirmed by this:
>The prompt has an option to switch this feature off. However, switching it off doesn’t stop the app from reading the accelerometer.
I've heard unreliable and whispered rumors (from the tech grapevine) that the accelerometer was a bot-farm mitigation strategy.
The idea is that a bot farm with thousands of phones on racks won't have some signature that the accelerometer should see when the screen is tapped (for example, when typing a message or hitting a like button).
It's probably being used for multiple purposes, regardless of the original intent. Recall that FB originally collected phone numbers for 2FA, but years later decided to use the information for ad targeting[1]. Not that Facebook is unique in this regard, of course; my own experience at companies which collect various bits and pieces of customer data is that, once the data is in a database somewhere, people are pretty good at finding excuses to use it.
It's been a few years but I do recall adding a framework that claimed to use the accelerometer to help with bot detection. I believe it was PerimeterX.
It uses essentially no battery. IIRC, it uses so little that even when your phone battery runs out and the phone turns off, the accelerometer keeps counting your steps.
That's what you'd see if the raw accelerometer data were continuously recorded and the actual step math were done in batches on that data, possibly along with other computations. If separate hardware were already doing all the math up front, it'd make no sense not to pull it once every 10 seconds or so, and there certainly wouldn't be visible lag between opening the Health app and the distance/steps numbers updating.
Entire reason to have separate hardware is so that you don't have to wake up the main processor. If you wake up the main processor every 10 seconds you've completely lost the advantage. The idea is if someone's phone is asleep in their pocket you need to wake it up occasionally to check for notifications and so on, but you don't want to be waking it up constantly.
Even if accelerometer data is free, IPC’ing the data to the app consumes a small amount of power. It adds up over time. If it’s not needed, the app shouldn’t subscribe to it.
If apps were worried that phoning home with the data they recorded used significant battery, we might be in a different position in terms of how frequently they do it.
Several such "motion chips" have onboard motion processing. As a random example the MPU-9250[1], a 9-DOF IMU, has a low-power pedometer feature which can keep step count while host processor sleeps or is offline.
Even without onboard processing it's a very obvious move to offload it to a tiny microcontroller. Apple actually started by using a physically separate NXP micro that they branded "M7". Now it is of course on the main chip, but still a separate Cortex-M core.
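For a sense of how little math such a dedicated core actually has to run, here's a minimal threshold-plus-refractory pedometer sketch. The threshold, refractory window, and synthetic trace are all invented for illustration; real pedometers are considerably more robust.

```python
def count_steps(magnitudes, threshold=1.2, refractory=20):
    """Count steps as upward crossings of acceleration magnitude (in g),
    with a refractory window so one stride isn't counted twice.
    The threshold and window here are illustrative, not tuned values."""
    steps = 0
    cooldown = 0
    for m in magnitudes:
        if cooldown > 0:
            cooldown -= 1          # still inside the previous step's window
        elif m > threshold:
            steps += 1             # new impact spike -> one step
            cooldown = refractory
    return steps

# Synthetic trace: 1 g baseline with a spike every 50 samples
# (roughly 2 steps/s at a 100 Hz sample rate).
trace = [1.0] * 500
for i in range(25, 500, 50):
    trace[i] = 1.5
print(count_steps(trace))  # 10 spikes -> 10
```

This is the kind of loop that fits comfortably on a low-power coprocessor, which is the whole point of keeping it off the main CPU.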
It can't be that intensive considering Apple's done that parallax homescreen effect since like iOS7. Also when you're in the Safari tab overview, it reads the gyroscope to do a 3D effect where the tabs tilt forwards/backwards. Probably a few other areas I'm not thinking of too.
> Now, if this social app is reading accelerometer data on your phone as well as the passenger’s phone, the app can easily figure out that both phones experience the same vibration pattern. ... Don’t be surprised if you receive a recommendation from the app to add this passenger as a friend.
Overall I find the article interesting, but this quote is borderline tinfoil. Given the amount of noise in accelerometer data, and signals much closer to the sensor than the bus itself, such as body movements, it would hardly be precise. Moreover, the cost of doing all that research and computation, as well as the data transfer, would hardly pay off.
Just an FYI, Apple does give its developer the ability to have the app wake up from background on-demand or at scheduled intervals to perform specific tasks. This is something that can be done as long as the phone is ON and connected to the public gateway either via cell towers or WiFi.
An example would be the Facebook app icon with notification count or the Mail app with unread emails count. This counter is updated based on background processing of Fb or email notifications.
I don't know exactly how it works, but I know notifications are their own separate concept that gets handled by the system and doesn't require the app to be running arbitrary code in the background
There is also the "allow background activity" permission though, which I'm not sure the bounds of, but you can disable it
> The question is whether anyone is really using this technique.
No. This is not at all the question. A possible privacy breach is a serious issue, regardless of whether there is a working POC. If this data is somehow compromised, a stalker could get your identity just by following you a few minutes on the street.
I recall that Snapchat explicitly mentioned accelerometer data in their privacy policy - why would you mention it unless you did or have plans to use accelerometer data in the future?
I don't think it would require significant bandwidth; the data is just integers which can be collected, compressed and uploaded asynchronously (as part of another heavy upload such as someone sending a picture). The analysis part could be similar to how Shazam works, but that can be done on the server side so on-device performance isn't a concern.
A single sample would be three doubles plus a timestamp (64-bit int), i.e. 32 bytes. The accelerometer's max update rate is 100 Hz, so capturing every possible sample comes to 3200 bytes per second. So bandwidth doesn't appear to be a big issue.
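Spelled out, using the parent's assumed sample layout (real APIs may pack the data differently):

```python
# Back-of-the-envelope bandwidth for raw accelerometer logging:
# 3 float64 axes + one int64 timestamp, at the 100 Hz max rate.
bytes_per_sample = 3 * 8 + 8              # 32 bytes
rate_hz = 100
per_second = bytes_per_sample * rate_hz    # 3200 B/s
per_day_mb = per_second * 86400 / 1e6
print(per_second, "B/s,", round(per_day_mb, 1), "MB/day uncompressed")
# -> 3200 B/s, 276.5 MB/day uncompressed
```

Even a full day of raw samples is only a few hundred megabytes before compression, and the signal compresses well, so piggybacking it on an existing upload is plausible.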
Most of the aforementioned apps already have autoplaying media and/or keep the camera running in the background (officially for quicker access to it when the user opens the camera view, as it normally takes a second or so to initialize it). Seems like collecting accelerometer data is a drop in the bucket in comparison.
Vehicle-induced vibrations would show up as significant peaks (overpowering any noise) on one or more accelerometer axes, and their timestamps (relative to other peaks) would precisely align with peaks on any other device in that vehicle.
As a layman, I think you could just sum all axes for each person, overlay the resulting track with everyone else's, and try to find a position where enough peaks correlate (constrained within a 5-minute window around the on-device timestamp, to account for clock drift while limiting the search space), and that should work well enough.
I'm sure the sociopaths working for Facebook will have a smarter way of doing this that's even more accurate.
I think there would be far too much noise. The accelerometer data is going to be different based on a number of factors:
Orientation? Is the phone in your hand? Your pocket? Your purse? Front seat / back seat?
By way of comparison, I worked on some automatic brake lights that trigger when an accelerometer detects you slowing down (bikes, skateboards, scooters).
It turned out to be way more complicated than we expected: hitting potholes, naturally slowing when you go uphill, taking a turn.
I've mentioned this before, but the Bosch 9-DOF sensors are accurate and have enough fidelity to track me walking through my house while sitting on a desk in the back room. That's the accelerometer. The gyros can detect music playing at the opposite end of the house, assuming the speakers are big enough: a phone playing YouTube, no, but a sound bar, yes.
Make no mistake, MEMS sensors are really very good.
I was trying to wire something together to track what vehicles were travelling down my forest road, left the room with the data streaming, and came back to be impressed.
I think the key is that the data would be in sync. You don't particularly have to care what the vibration is or which direction, the important bit is that these patterns line up with similar patterns collected from the other phone.
A guy that I studied with had his company bought by a tech giant. Not FAANG, but a big company that you've heard of.
Their product - in production - was to map establishments such as stores and provide data about what the clients were looking at. Did you walk past the men's shirts and turn your body? Noted. Did you stop at the condoms? Noted. This was all done with the accelerometer, if the user had a "compatible app" installed. A lot of apps carried their technology, from what I understand.
I actually thought that this was a well-known use of our devices' accelerometer until I read the responses in this thread.
I don't know the whole tech stack, I only got this much information after surreptitiously bumping into him one day. They had a few dozen devs and years invested in the company - and this was in 2017 or 2018. I'm sure that there were other technologies involved, in fact I do think that both GPS and some in-store box which was already used to record passing IMEIs were involved.
I was not planning on meeting him - but we sat next to each other on the train. The previous time that I had seen him was a decade prior when he bought my modified N-Gage.
Yes, thanks. I've had this device autocomplete my own name to something somewhat insulting. I think that I'd prefer to just leave in minor typos that are readable than have an entire word replaced with an inappropriate word.
Actually a pretty good idea for a business if I'm being honest. Like, it's horrifying, I don't like that this is a thing, but it's fairly valuable data for a business.
It's possible Facebook uses the accelerometer for bot detection or click fraud detection. They could use the data to help work out whether a human is interacting with the phone or a robot (do an image search of "click farm" if you want an idea of why this might work).
Advertising and advertising fraud are possibly the worst thing that has happened to privacy in general. Advertisers fight against privacy protection so they can shove ever better targeted ads for shit we don't need down our throats, and fraudsters force advertisers to engage middle men that resort to even worse tactics than advertisers do just to clamp down a tiny bit on fraud.
As for the click farms - I would not be surprised if the next step will be gimbal mounts that mimic an actual human's movement...
But why are people on this thread making excuses for it? I had no idea this was happening and certainly don't WANT this information to be recorded by Facebook. Do they know when I'm having sex if I leave the phone on my bed?
And given that who ever I am with might have Facebook on her phone, and it might be on my bed also...
They do both: mass simulators with faked sensor data, as well as phones on racks manipulated by mechanical digits. A phone that never moves at all doesn't flag the former, but it does flag the latter.
It's still a breach of privacy unless it's properly disclosed to the user, and there's a fair question to be asked as to whether the bot/non-bot detection can classify which "bot" it's looking at, and if so whether the same can be applied to track users.
The accelerometer/gyroscope would be a great tool for spotting click-farms. Bunch of phones in the same location, in the same position, and rarely move...something shady's happening.
One of the articles posted to HN recently covered this by talking about faking the accelerometer data to make it look like the phones were being randomly moved, so I guess the bot guys are already on top of this, from both sides of the war.
I wonder if there is a market for a smart phone with a hardware disconnect switch for all of its sensors? Mic, accelerometers, camera, ambient light, etc?
I'd love to be able to flick a switch and just disable everything. I'm paranoid about my devices listening to me without my permission. The only issue is that for such a feature to be useful it probably couldn't disconnect everything, wifi and mobile data would probably need to remain software switches for example.
> I'm paranoid about my devices listening to me without my permission
"Your devices" aren't listening to you. What is listening to you is hostile third party software that you've been goaded into running on your devices (whether by javascript, apps, chipset, or as part of the OS), that has been insufficiently sandboxed. Hardware switches are just mitigations for an insecure OS that violates its fiduciary duty to the user, and a less good solution than having those capabilities built right into the OS. If sensor switches became popular, then hostile apps could just refuse to work when those sensors were off - just as hostile apps will refuse to work if you do not grant them requested permissions. Whereas a real user-representing OS would allow one to designate that an app should receive synthetic data for any sensor - eg set a fixed "GPS location" and then add some plausible sounding noise so that an app couldn't tell it apart from a stationary phone.
But what is really needed for sustainable user privacy is user-representing software that talks to adversarial counterparties solely through well-defined protocols. This isn't necessarily workable for new innovations, but for all well established technologies (messaging, pictures, video chat, social networking, message boards, etc) there should be Free clients that interoperate with the proprietary systems. Much has been said about "breaking up" Big Tech to constrain their power, but mandating such interoperability would be a much better approach to antitrust.
> I wonder if there is a market for a smart phone with a hardware disconnect switch for all of its sensors? Mic, accelerometers, camera, ambient light, etc?
Thanks for the link. This looked too good to be true until I clicked the order now button... $1,199...
I normally go for budget smart phones in the range of $100 - $200. I'd pay more for something with decent build quality and a privacy focus, but $1,199 is a little pricey for me.
> It has about as powerful hardware wise as Purism
It really isn't:
The i.MX 8M Quad is better than the Allwinner A64: 30% faster CPU clock speed, 140% faster RAM standard, 140% better OpenGL performance, USB 3.0, and support for higher-resolution cameras.
Even if it were $300 it would still be a terrible phone. It's extremely inefficient and its battery life is hilariously bad; the device is advertised as extremely secure but it actually isn't, and you can't even easily update the modem firmware. The company has a history of lying and its CEO is a habitual liar. The PinePhone, while not secure either, provides basically the same thing at a fraction of the cost ($150) with slightly weaker specs. Pine64 is now making a PinePhone Pro, which is significantly faster than the Librem 5 at a fraction of the cost, so why bother with Purism at all?
> The PinePhone, while not secure either, provides basically the exact same thing at a fraction of the cost
This is not the same thing at all. Apart from huge differences in performance [0], Pine64 does not develop any software, and most PinePhone users run Phosh, which Purism develops. Linux phones can't be sustained by volunteers alone; they need professional developers.
But the PinePhone's switches are not easily accessible: you can't switch your microphone on while receiving a call (unlike on the Librem 5). Also, there's no lockdown mode for sensors.
IMO: the software on the PinePhone is significantly more trustworthy than most devices (as long as you're not careless or install non-free apps.)
The switches are there more for the sake of completeness but I would just as soon trust muting the microphone in pavucontrol.
Someone should just 3D print a back case for the PinePhone that exposes them. Or better, publish instructions on how to cut the current case and 3D print a lever system that matches the hole, for easy actuation of the switches from outside.
You can't really trust HW switches either. Someone with physical control over the phone for a while can easily short a switch so that it's effectively on all the time, no matter its position (they can do it at the same time they install the SW backdoor).
It might be meant as a joke, but I am so tired of every comment that seems to imply that trying to do anything is pointless:
Yes, a significantly advanced adversary can always get your communications if they want, but every time someone does choose the more secure option it raises the bar for them.
What's the point of security measures if you just dismiss their weak points?
"FBI is comming for me, but I have my HDD encrypted..." Well, how well did HDD encryption serve that guy who the FBI just pulled the turned on/unlocked notebook from before he noticed what's going on and was able to react? He might just as well not bothered, when he knew "FBI" is in his threat model, and didn't account for this obvious attack in any real way.
If you had instead written something along the lines of "for those who have mighty adversaries and actually need this for their own security, be aware that things like hardware switches only go so far", followed by an explanation, that would have been useful.
Instead you wrote it in a way that I, and probably a lot of others, took to mean that even hardware switches don't matter.
Sadly, the camera is still a potato, relatively speaking, compared to e.g. the Pixel 6 Pro or S21 Ultra.
And the camera is pretty much my main criterion for choosing a phone, because I like the outdoors a lot and take many photos (and I can't get myself to carry around a 'real' camera, or get into that whole topic).
The Librem 5 is basically a scam at this point; many people who ordered it 4 years ago still haven't received it. It barely qualifies as a "mobile phone" with how bad its battery life and efficiency are. Plus it's $1,200. Hard pass.
Not super related, but my Physical Therapist was super impressed that my phone could show my left/right imbalance ("walking asymmetry" in iOS health).
It's been a pretty good measure of my progress in recovery post-surgery. She had no idea this even existed, and then had a second patient point it out to her just a week or two later. I imagine Android has similar features (or 3rd party apps can be installed to do so).
It's wild what new data can be used for fingerprinting. [1] describes using magnetic signals to fingerprint a device. [2] describes identifying inputted text from CPU interrupt data.
I wouldn't be surprised if health data can also be used to fingerprint users, even across devices. I wonder what lower level runtime information (e.g. CPU interrupt data) is available to apps.
"Sensors permission toggle: disallow access to all other sensors not covered by existing Android permissions (Camera, Microphone, Body Sensors, Activity Recognition) including an accelerometer, gyroscope, compass, barometer, thermometer and any other sensors present on a given device. To avoid breaking compatibility with Android apps, the added permission is enabled by default."
On the other hand, the nature of Apple’s BDFL ownership means that they could very easily add a permission prompt in an iOS release, allowing them to default to disabled, then tell app developers to update their code or go to hell. They’ve done it before with other APIs.
In fact they’ve already done this on the web: the DeviceMotion API is behind a permission prompt. I’m surprised the same isn’t the case for apps.
IIRC Apple locked down the Contacts API precisely because giant app developers were abusing it. And look at the recent ad tracking stuff that Facebook hates: I don’t think Apple is all that afraid of angering developers, big or small.
The user has to perform an action to initialise the Generic Sensor API, just as they do with sound. That's just a click or a keypress, but you can't get sensor data without it.
Sure, but that's still a user action. The point is that the browser can't read anything simply by the user landing on the page - the user has to do something to enable it, even if they're not aware of the consequence of that action. This stops most malicious usage.
Apple seems to track accelerometer data in the background (which is why the Health app can tell you about steps and stairs...).
But beyond that data leakage is an interesting problem.
10 years ago I started working at a company that did home power monitoring. We used my boss's house to test. When he went on vacation you could clearly see it in the power use. The daily rhythms of a home (laundry on Wednesday, out late on certain nights), when they cooked: it all became very apparent under the guise of just monitoring how much power you were using. We switched to selling to businesses shortly after, which was much better. Though the customers who wanted to generate as much as they used, and monitored it, were interesting and provided good feedback on the product.
Our big boss said his wife got a little aggravated with him when he noted she had come in early ("you can keep your toys, just don't talk to me about it"), and he noted his house cleaners operated by turning on all the lights in his house and turning them off only when finished cleaning a room...
It's a weird world, and the data you put out there might say more than you think.
It's not just overall electricity use over time; by doing high-frequency spectral analysis, they can identify the type of load, e.g. your washing machine is running or your oven is on.
That was just starting to happen when I left. We would put a current transformer on the mains and each circuit and label it, to try to glean details. It was a difficult install, and this new kind of install is way easier. We wondered how well it could separate out the individual circuits.
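Even the crudest version of that disaggregation idea, flagging step changes in total power, fits in a few lines. The wattages and threshold below are invented, and real systems add the spectral and transient features described above.

```python
def detect_events(watts, min_step=200):
    """Flag sample indices where total power jumps by at least min_step W:
    the simplest form of non-intrusive load monitoring. Each (index, delta)
    pair is an appliance plausibly switching on (+) or off (-)."""
    events = []
    for i in range(1, len(watts)):
        delta = watts[i] - watts[i - 1]
        if abs(delta) >= min_step:
            events.append((i, delta))
    return events

# Synthetic whole-home trace: 300 W baseline, a 2000 W oven turning on
# at sample 10 and off at sample 40.
trace = [300] * 10 + [2300] * 30 + [300] * 10
print(detect_events(trace))  # -> [(10, 2000), (40, -2000)]
```

Matching the on/off deltas to known appliance wattages is what lets a single mains measurement reveal the household rhythms described in the parent comment.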
The most blatant (mis)use of this was Uber using vibration data to figure out which drivers were getting notifications from the Lyft app while they had Uber running.
My favorite trick is turning on the spectrum histogram history graph and placing the phone on a computer with a spinning HDD - you can trivially determine the RPM of the HDD from the spike in the graph just by eye.
Huh, the spectrum goes to a few hundred hertz (edit: 250 to be precise, on Android 11, Samsung high end device (but second hand, the price new is ridiculous)). I have indeed measured a few 50 Hz movement before and was already surprised that worked properly, but the scale doesn't go to even one thousand. How do you read 5400/7200/... rpm? The magnetic spectrum is even worse, going until 50 Hz. Or are you talking about the audio spectrum? I wanna test it now but I don't have hard drives on hand anymore...
7200 rpm is 120 hz and 5400 rpm is 90 hz. You should see spikes at these frequencies on the acceleration spectrum tool. The history view makes these spikes very apparent.
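If you only care about one or two candidate bins (90 Hz and 120 Hz), you don't even need a full FFT: the Goertzel algorithm evaluates a single frequency bin cheaply. A sketch on a synthetic capture; the 500 Hz sample rate is an assumption (it matches the ~250 Hz spectrum ceiling mentioned above, but devices vary).

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Power of a single DFT bin via the Goertzel algorithm,
    a cheap alternative to a full FFT for a handful of bins."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)   # nearest bin index
    w = 2 * math.pi * k / n
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# Synthetic 500 Hz accelerometer capture of a 7200 rpm (120 Hz) vibration.
rate = 500
signal = [math.sin(2 * math.pi * 120 * i / rate) for i in range(1000)]
p120 = goertzel_power(signal, rate, 120)
p90 = goertzel_power(signal, rate, 90)
print(p120 > 100 * p90)  # -> True: the 120 Hz bin dominates
```

The same two-bin comparison distinguishes a 7200 rpm drive from a 5400 rpm one, which is all the histogram view is showing you visually.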
Is this why the M1 Pro/Max MBPs also come with an accelerometer and gyroscope [1] even if it doesn't come with mechanical hard drives? Because Apple needs to support Catalyst apps?
Looking at https://stackoverflow.com/questions/7829097/android-accelero... it seems that this could be doable - generally it looks like these devices are too noisy for straightforward inertial navigation, but constraining the problem to how far along your train is in a tunnel with occasional wifi is much more possible.
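For intuition on why unconstrained inertial navigation fails on these devices, here's a toy double integration of a stationary phone's readings with a small constant bias; the bias and noise figures are invented but illustrate the problem.

```python
import random

def dead_reckon(accels, dt):
    """Double-integrate acceleration (m/s^2) into position (m).
    Any constant sensor bias grows quadratically in the position estimate,
    which is why raw dead reckoning drifts so badly."""
    velocity = position = 0.0
    for a in accels:
        velocity += a * dt
        position += velocity * dt
    return position

random.seed(1)
dt = 0.01  # 100 Hz sampling
# Phone actually at rest, but with a tiny 0.05 m/s^2 bias plus noise.
readings = [0.05 + random.gauss(0, 0.2) for _ in range(6000)]  # one minute
print(dead_reckon(readings, dt))  # roughly 90 m of drift from the bias alone
```

Constraining the estimate to a known track with occasional WiFi fixes, as the parent suggests, is exactly the kind of extra structure that makes the problem tractable despite this drift.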
Does anyone have any examples of this stuff working out in the real world? Particularly curious about the "we used accelerometer data" part of it to infer that these two individuals were in the same place.
The flip side of this argument is that having to explicitly allow accelerometer use is a bit of a pain - especially on the web where it's quite nice to just be allowed to make your design react to device motion without having to ask.
Maybe they could just reduce the sensitivity slightly to prevent things like speech recognition working (though I am skeptical of that working in the real world)
Originally it wasn't behind a permission prompt, so you could do the little design flourishes the OP described. But now you have to request access. Orientation change events (i.e. portrait to landscape) remain accessible without permissions, though.
An example I’ve seen (I forget where but hey, it won’t work now anyway) is to move some background elements to create a parallax effect, like iOS itself does:
I’m not trying to make out that it’s a crucial feature or anything but it’s a useful case study in permission gating things: it’s small enough that you’d never trigger a permission prompt to ask to do it, so you just don’t instead, and lose a class of nice subtle design flourish.
Just FYI for anyone wondering: I believe this parallax effect specifically uses the UIMotionEffect API. It's designed to be a simple and straightforward way to implement these kinds of effects without the developer having to worry about cleaning up the raw, noisy motion data.
It has a few limitations/features. I don't think it necessarily always runs at the screen refresh rate. It will also "self level" after a while: if the user rotates their device and this causes a UI effect, then holds the phone still, the effect resets after a few seconds.
Default accelerometer permission for all apps, plus recognizing sounds from the accelerometer? That would explain the situation where I talk about something near my iPhone and then relevant ads appear in the browser on the device, or within the local network.
The microphone also explains this. I keep a tight rein on which permissions I give apps, and I rarely get accurately targeted ads. Last week I suddenly noticed that the ads I was seeing on Facebook matched the conversations I'd had that day. I immediately knew an app had been eavesdropping on me. Turns out I had given Instagram microphone and camera permission "while the app is in use" earlier in the day, to try out an Instagram feature I was curious about, and forgot to turn it off. Even though I wasn't using the camera or mic intentionally, Instagram was still spying on me. This really should be illegal behavior by apps.
How long does the app stay in use? I have insta installed (for a business that I’m slacking on) and I open it about once a month for roughly 1 minute. Maybe I should just uninstall and reinstall it each time I need to use it.
Following up to myself: I just realized that the Background App Refresh setting for each app is a permission that you don't get prompted for. It was enabled on my phone for loads of apps. Why isn't there a prompt for this?
TV: tape over the microphones and disable wifi (as some will connect to any open wifi nearby without informing you)
Assistants: not possible, simply don't use Alexa, Hey Google, Cortana, etc.
Browsers: go into the "Privacy" settings of your browser; generally you can select "always deny" with exceptions where you need them.
Other: security cameras (e.g. Nest), doorbells, etc. there is little you can do as you have opted-in to being recorded.
---
Microphone use in advertising has been openly marketed by adtech companies for over 10 years. For example, beginning in 2012, Shazam listened for commercials around you and then facilitated concurrently displaying the same advert on your personal device to make sure you saw it. Disabling "always listen" in Shazam might mitigate this. https://www.marketingweek.com/shazam-that-ad/
Listening services can make elite money, so they're now common in places you don't suspect. One big earner is linking a commercial heard (TV, Spotify, movie theater, etc.) to a purchase made, so listeners keep a record that you heard something. For example, they can confirm a "conversion" (payout) if your phone's Bluetooth/WiFi/etc. IDs show up at a Gap store after you saw a Gap ad on TV -- even better if it's linked to your Mastercard/VISA data showing a purchase. That data could be the difference between an adtech company (e.g. Google, AT&T, Adobe, Meta, Amazon) getting paid $0.001 vs $10 for placing an ad in front of you.
I have always suspected this and am curious what TikTok is reading. For example, if I put my phone down and the video keeps looping, does TikTok's algo discount my "time watching" because it knows my phone is on a surface rather than in my hand?
This would support the theory that Facebook can listen to you and serve ads on things you talk about even if you haven't given it microphone access right? I'd assume they have enough data to be able to translate vibration patterns into language.
Well, Facebook does use the accelerometer for the 360 panorama view.
Could it be as simple as initialising CoreMotion when the app becomes active, so their renderer can react without a fresh initialisation each time?
One thing I thought regarding the new iOS IDFA rules is that someone could fingerprint people by using accelerometer and touch gestures to create some sort of new IDFA shared across apps.
If I remember correctly, it might be necessary to declare the capability in a manifest, but there's certainly no prompt, nor a way to opt-out of an app reading motion data in the settings.
you can "opt out" using adb by revoking the gyro permission. some non-google androids also have the possibility of revoking it with the ui, or even going as far as to promt for it (graphene os)
1: https://arxiv.org/pdf/1807.04667.pdf
2: http://www.ijetch.org/vol8/900-M302.pdf
3: https://arxiv.org/pdf/1907.05972.pdf