select * from mobile_location
where latitude between 38.88778433380732 and 38.891917997746894    -- bounding box roughly covering the US Capitol grounds
and longitude between -77.01269830654866 and -77.00613225870377
and epoch_timestamp between 1609954200 and 1610067600               -- ~2021-01-06 17:30 UTC to 2021-01-08 01:00 UTC
Returned quite a number of mobile devices, accurate to the meter. Was fun to see which phone was in which room, or on which blade of grass, around the building. I'm not even American.
Yeah, that raises a whole bunch of other ethical questions though, like why you're able to do this, what access to PII you have, and why you're able to run queries like this on a Saturday.
I hope your employer keeps track of stuff like this.
What do you expect? Most such data comes from apps; they just ask for permission and give you no rights at all. Location data is a real business, whether for traffic services or market intelligence. Look at this example from Thasos: they tracked how many hours and shifts were being worked at companies and sold that data to traders.
http://thasosgroup.com/blog/thasos-data-tesla-wsj/
I did not say that I think it's a good thing or that it's ethical. It's just the truth. But we all accept it somehow, so that Google or whoever can give us good traffic warnings. Here in Switzerland location data is also sold by the telcos; you have to opt out yourself (at least with Swisscom).
As someone who works for a tech giant I'd like to provide some input here.
1. Absolutely yes for ad hoc queries and there's a wealth of logging and privacy features built into all of our tools.
2. Absolutely yes and those queries are audited. For any query that matches some heuristics it'll warn us with a big scary message to make sure the query is legitimate work.
For the query linked above, about checking who was around a location at a given time, I'd probably be fired before the query completed. It's a pretty comfortable job, but they don't mess around with user data: it's zero tolerance if it's abused, and even if there was never a warning message, your ass is still out the door when they catch you.
For all the flak tech giants get around PII, I think it's horrifying that smaller players can still amass hoards of sensitive data and yet have basically no safeguards to prevent misuse, like an internal engineer querying whatever they want to look at.
It does not. Maybe it should, though. A company I worked at logged everything that was done or accessed within our Salesforce instance; maybe something like that is needed, rather than allowing folks to run arbitrary SQL queries against the database.
This is standard practice at large companies with proper data controls. Usually they have a "break glass" feature for emergencies and don't let any humans access PII without a damn good reason.
I guess this part answers the questions of "why", "what", and "does the employer care". Well, at least these people now know what it was they did "not" have to hide the whole time.
My powered-on phone was 0day'd by Cellebrite, according to internal law enforcement emails in disclosure, and my powered-down phone was not supported at all.
Edit: The cracked phone has been in my possession for a while now, and I've only powered it on twice, for mere minutes, since getting it back, if there are any phone experts around who want to investigate.
Seems to be a common thread - if you want your phone to remain secure, it must be powered off before seizure. This way, there is no encryption key in RAM. As long as the key is sufficiently strong (random passphrase, not a 4-digit PIN), you have a reasonable guarantee that they will fail to crack it.
Looking at the iOS platform security document ([0] page 68), apps use NSFileProtectionCompleteUntilFirstUserAuthentication by default, which keeps keys in memory after the first unlock, regardless of the Power + Volume Down lock. If an app opts into NSFileProtectionComplete, I believe the keys are purged from memory upon locking.
More obscure phones might not have support, but that doesn't mean they can't be opened with more effort.
When off, you're relying on the strength of the FDE passphrase and whatever key strengthening they implemented, and that the OS didn't leave some key fragments somewhere (accidentally on flash, which would be very bad, or remanent in memory if it has only been off for a short period).
Using a long alphanumeric password (>12 random characters, or a >20-character passphrase), not installing random apps, keeping it patched, and keeping it powered down is probably the best you can do. I wouldn't use the baseband comms if I could avoid it; it's just a huge 4G attack surface.
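For a rough sense of why those lengths matter, here's a back-of-the-envelope entropy calculation (a sketch; the 10^12 guesses/sec rate is an arbitrary assumption, and real rates against a memory-hard KDF are far lower):

    import math

    def entropy_bits(alphabet_size: int, length: int) -> float:
        """Entropy in bits of a uniformly random string."""
        return length * math.log2(alphabet_size)

    # 12 random alphanumeric characters (26 + 26 + 10 = 62 symbols)
    print(entropy_bits(62, 12))   # ~71.5 bits
    # 20 lowercase characters (real word-based passphrases have less entropy
    # per character, so treat this as an upper bound)
    print(entropy_bits(26, 20))   # ~94.0 bits

    # Years to exhaust a 71.5-bit keyspace at a hypothetical 10^12 guesses/sec
    guesses_per_sec = 1e12
    print(2 ** 71.5 / guesses_per_sec / (3600 * 24 * 365))   # ~106 years

Even at that absurd guess rate, 12 random alphanumeric characters hold up; with key strengthening like scrypt in the loop, the effective rate drops by many orders of magnitude.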
If you root your phone, you can set your FDE passphrase to whatever you want while keeping a usable shorter unlock code. My phone's FDE passphrase is 26 characters long.
Even better: if you have a reasonable worry about losing control of your phone and the data on it, just get a second one that contains nothing sensitive or personally identifiable and uses wifi only, no SIM.
This is good practice anyway if you go to public places where your phone could get stolen.
What sort of complexity was the S7's passcode? A 4-digit PIN? More than 4 digits? A short password? A longer passphrase?
Also, which Android versions? Stock Samsung or an alternative ROM? (My S6 Edge is "stuck" on Android 7 without replacing the OS with a non-Samsung alternative. My S4 is running a much newer Lineage Android version...)
A ?l?d mask with, I think, 9 characters, on the stock Android OS with updates. Both phones were, I think, at least Android 5.0, because that's when they switched to scrypt for storing the password/encryption keys.
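For anyone unfamiliar, ?l?d is hashcat mask notation: each position is a lowercase letter or digit, 36 possibilities. A quick sketch of the keyspace, plus a scrypt call via Python's hashlib (the 10^6 guesses/sec figure is a hypothetical rate, and the scrypt parameters are illustrative, not Android's actual ones):

    import hashlib

    # hashcat ?l?d mask: 26 lowercase letters + 10 digits = 36 symbols/position
    keyspace = 36 ** 9
    print(f"{keyspace:.3e} candidates")                  # ~1.016e14

    # scrypt is memory-hard, so guess rates are low; even at a generous
    # hypothetical 10^6 guesses/sec, exhausting the space takes years:
    print(keyspace / 1e6 / (3600 * 24 * 365), "years")   # ~3.2 years

    # One scrypt derivation (parameters illustrative, not Android's actual ones)
    key = hashlib.scrypt(b"examplepw9", salt=b"16-byte-salt-xx!",
                         n=2**14, r=8, p=1, dklen=32)
    print(key.hex())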
Sure, someone could randomly guess it, but it's not likely. You don't need to have or know a private key to generate a Bitcoin address with a valid checksum that can be sent to.
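To make that concrete, here's a minimal sketch of producing a syntactically valid P2PKH address from 20 random bytes, with no key pair behind it (standard Base58Check encoding; the helper names are my own):

    import hashlib, os

    # Bitcoin's Base58 alphabet (no 0, O, I, l)
    B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

    def base58check(payload: bytes) -> str:
        """Append first 4 bytes of double-SHA256 as a checksum, then Base58-encode."""
        data = payload + hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
        n = int.from_bytes(data, "big")
        out = ""
        while n:
            n, rem = divmod(n, 58)
            out = B58[rem] + out
        # each leading zero byte is encoded as a leading '1'
        return "1" * (len(data) - len(data.lstrip(b"\x00"))) + out

    # Version byte 0x00 (mainnet P2PKH) + 20 random bytes where a real
    # hash160(pubkey) would go: wallets will happily send to this, but
    # nobody, including us, holds a private key for it.
    print(base58check(b"\x00" + os.urandom(20)))

Anything sent there is burned for all practical purposes, since finding a matching private key means inverting hash160, roughly a 2^160 search.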
I just created several trillion "disposable"-sounding addresses and started sharing the one that I guessed a private key for. There'll be some real surprises in 2025!
If you can generate the private key for a 27-character-or-longer English-readable Bitcoin address, then you can use the same magical technology you possess to earn far more than $1 billion USD.
> Also forfeiture doesn't account for the fact that you may have paid capital gains tax. I feel like they really shafted me on that one.
As someone also charged but not convicted, I learned that, hypothetically, you also don't get to deduct expenses if you lose. Meaning you could run a $10 billion gross-revenue drug empire that costs $9.9 billion to run, so you walk away with $100m, and you'd still have to pay $10 billion in forfeiture.
If the most you could lose is whatever margin you made, that wouldn't be much of an incentive to not do that in the first place and it would be quite a broken system.
What I wrote about is forfeiture, which doesn't include criminal penalties. Most of the American (which I am not) federal press releases I've read mention a maximum $250,000 fine per count.
>The Sherman Act offense charged carries a statutory maximum penalty of 10 years in prison and a $1 million fine for individuals. The maximum fine may be increased to twice the gain derived from the crime or twice the loss suffered by victims if either amount is greater than $1 million. The false statements offense charged carries a statutory maximum penalty of 5 years imprisonment and a $250,000 fine. The obstruction of justice offense charged carries a statutory maximum penalty of 20 years imprisonment and a $250,000 fine.
These fines are on top of the forfeiture, so if the person in this article earned $1 million gross and $100k net, on a guilty finding they would have to repay $1 million + say $500k in fines = $1.5m, when they only "earned" $100k.
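In code form, with the same illustrative numbers (the $500k in fines is an assumed figure, not from any statute):

    # Forfeiture is assessed on gross proceeds, expenses are not deductible,
    # and fines come on top. Numbers are the hypothetical ones from above.
    gross = 1_000_000   # total proceeds
    net   =   100_000   # what was actually kept after costs
    fines =   500_000   # assumed total fines across counts

    owed = gross + fines
    print(f"owed ${owed:,} vs actually earned ${net:,}")
    # owed $1,500,000 vs actually earned $100,000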
It's a little more complicated than that... He didn't just visit URLs; he downloaded that data, saved it, and posted it somewhere else. Regardless of the "hacking" charge, posting people's data is taking it too far. If he had just used it as a PoC, that'd be different.
Despite obvious differences both in societal import and trivial details, weev's case was like a practice run for Assange's. Not only is the government using the same playbook, but the character assassins in traditional and social media are as well.
There's a lot they can do, but then it becomes impossible to search your inbox for old emails by subject, from, to, etc.
You can reduce what an adversary sees by not putting anything useful in the subject line; the only thing you can't protect is who you're talking to and when.
The third party doctrine[1] allows the government to access your call records (and other metadata) without a warrant, but I don't think anyone's fine with that.
>If you can get a dump of the computer active memory you can ultimately get the decryption keys on consumer hardware
What methods are available to get a memory dump if FireWire is disabled? The feds couldn't break my encryption after ~1.5 years, but my devices were all off when they showed up. Ironically, the one device they did get into was a powered-on cell phone, but it had little evidentiary value and in one funny way was partly exculpatory.
USB-C/Thunderbolt or eSATA with a vulnerable controller. I know modern OSes have protection against drive-by DMA, but I don't know if they protect against a memory dump when the "user" - or an attacker who has gained access to a logged-in system - consents.
All these attacks target decryption keys in memory, so they don't work on devices which are turned off.
I'm assuming the computer is "locked" but online, and no user consents to logging in to run anything. Will any of these methods still work if the FireWire drivers are disabled?
In that case USB-C/Thunderbolt won't work; in the default config you need to trust the device and cable before anything too interesting can happen. This can be turned off via a BIOS setting, though, so there's room for misconfiguration.
eSATA or PCIe hotplug might still work. However, the former is getting less common, and the latter is uncommon on consumer mainboards.
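If you want to see where a given Linux box stands, the kernel exposes the relevant Thunderbolt settings in sysfs. A sketch (the paths below are the standard thunderbolt driver attributes, but availability depends on kernel version and hardware):

    from pathlib import Path

    def read(p: str) -> str:
        path = Path(p)
        return path.read_text().strip() if path.exists() else "<not present>"

    # Security level of the first Thunderbolt domain: "none" auto-trusts any
    # device; "user"/"secure" require explicit authorization before DMA.
    print("TB security level:",
          read("/sys/bus/thunderbolt/devices/domain0/security"))

    # Whether the kernel claims IOMMU-based DMA protection for Thunderbolt
    print("IOMMU DMA protection:",
          read("/sys/bus/thunderbolt/devices/domain0/iommu_dma_protection"))

    # Whether any IOMMU is active at all
    iommu = Path("/sys/class/iommu")
    print("IOMMUs:", [p.name for p in iommu.iterdir()] if iommu.is_dir() else "<none>")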
/r/kidneystones
sad face noises