Hacker News: Drip33's comments

Another example of a great sub:

/r/kidneystones

sad face noises


I have access to some marketing data and for fun,

select * from mobile_location
where latitude between 38.88778433380732 and 38.891917997746894
  and longitude between -77.01269830654866 and -77.00613225870377
  and epoch_timestamp between 1609954200 and 1610067600

Returned quite a number of mobile devices accurate to the meter. Was fun to see which phone was in which room or blade of grass of the building. I'm not even American.
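Bounding-box queries like that are usually generated from a center point and a radius. A minimal sketch in Python of that conversion (the helper name, the center/radius values, and the meters-per-degree constant are my own illustrations, not from the comment):

```python
import math

def bounding_box(lat, lon, radius_m):
    """Approximate lat/lon bounds for a circle of radius_m around a point.

    One degree of latitude is ~111,320 m; a degree of longitude shrinks
    by cos(latitude). Crude, but fine at marketing-data scale.
    """
    dlat = radius_m / 111_320
    dlon = radius_m / (111_320 * math.cos(math.radians(lat)))
    return (lat - dlat, lat + dlat, lon - dlon, lon + dlon)

# A ~230 m box around a point near the query above
lat_min, lat_max, lon_min, lon_max = bounding_box(38.8898, -77.0094, 230)
```

Feed the four numbers into a `BETWEEN` clause and you get exactly the shape of query quoted above.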


Yeah, that raises a whole bunch of other ethical questions though, like why you're able to do this at all, what access to PII you have, and why you can run queries like this on a Saturday.

I hope your employer keeps track of stuff like this.


My guess is that this being possible would be the norm, rather than the exception. And that keeping track of individual queries is not.


Well, that sounds worrying.


What do you think? Most such data comes from apps; they just ask for permission and give you no rights at all. Location data is quite a business, whether for traffic or commercial analysis. Look at this example from Thasos: they watch how many hours and shifts are worked at companies and sell the data to traders. http://thasosgroup.com/blog/thasos-data-tesla-wsj/


Just because something is being done does not mean it’s ethical. Fortunately, this is illegal in Europe.


I did not say I think it's a good thing or that it's ethical. It's just the truth. But we all accept it somehow, so Google or whoever can give us good traffic warnings. Here in Switzerland location data is also sold by the telcos; you have to opt out yourself (at least with Swisscom).


Might be illegal in Europe but it still happens [1].

[1] https://nrkbeta.no/2020/12/03/my-phone-was-spying-on-me-so-i...


Does your company log every select query you do on your prod db? If yes, does it automatically raise alerts when PII is accessed?

I'm genuinely asking, I have never seen such a setup.


As someone who works for a tech giant I'd like to provide some input here.

1. Absolutely yes for ad hoc queries and there's a wealth of logging and privacy features built into all of our tools.

2. Absolutely yes and those queries are audited. For any query that matches some heuristics it'll warn us with a big scary message to make sure the query is legitimate work.

For the query linked above about checking who was around a location at a given time I'd probably be fired before the query completed. It's a pretty comfortable job but they don't mess around with user data, it's zero tolerance if it's abused and even if there was never a warning message your ass is still out the door when they catch you.

For all the flak tech giants get around PII, I think it's horrifying that smaller players can still amass hoards of sensitive data and yet have basically no safeguards to prevent misuse, like an internal engineer querying whatever they want to look at.


This is how it should be.


It does not. Maybe it should though. A company I worked at logged everything that was done or accessed within our Salesforce instance; maybe something like that is needed, rather than letting folks run arbitrary SQL queries against the database.


This is standard practice at large companies with proper data controls. Usually they have a "break glass" feature for emergencies and don't let any humans access PII without a damn good reason.


99% of "marketing" companies are shady fly-by-night ops, are you expecting any standard procedures from there?


> to some marketing data

I guess this part answers the questions of "why", "what" and "does the employer care". Well, at least these people now know what they did "not" have to hide the whole time.


It’s precise to the meter but is it really accurate to the meter?


You should be fired.


What if I own the company and it's an acceptable Terms of Service use? What if I'm unemployed? Should I still be fired?


Interesting. Who sells marketing data like this?


I found this page useful to test how code I've written handles unusual characters.


My powered-on phone was 0day'd by Cellebrite according to internal law enforcement emails in disclosure, and my powered-down phone was not supported at all.

Edit: The cracked phone has been in my possession for a while now and I've only powered it on twice for mere minutes since getting it back, in case there are any phone experts around who want to investigate.


Mind sharing the phone make/model and when this happened?


Galaxy S7: online and cracked. Galaxy S4: offline and not cracked. ~2017-2019.


Seems to be a common thread - if you want your phone to remain secure, it must be powered off before seizure. This way, there is no encryption key in RAM. As long as the key is sufficiently strong (random passphrase, not a 4-digit PIN), you have a reasonable guarantee that they will fail to crack it.


What about the iphone POWER+VOLDOWN thing that flushes the biometric keys and requires the pin?


Looking at the iOS platform security document ([0] page 68), apps use NSFileProtectionCompleteUntilFirstUserAuthentication by default, which keeps keys in memory after the first unlock, regardless of the Power + Volume Down lock. If an app opts into NSFileProtectionComplete, I believe the keys are purged from memory upon locking.

[0] https://manuals.info.apple.com/MANUALS/1000/MA1902/en_US/app...


More obscure phones might not have support, but that doesn't mean they can't be opened with more effort.

When off, you're relying on the strength of the FDE passphrase and whatever key strengthening they implemented, and that the OS didn't leave some key fragments somewhere (accidentally on flash, which would be very bad, or remanent in memory if it has only been off for a short period).

Using a long alphanumeric (>12 random, >20 passphrase), not installing random apps, keeping it patched and keeping it powered down is probably the best you can do. I wouldn't use the baseband comms if I could avoid it, just a huge 4G attack surface.
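The gap between those length recommendations and a PIN is easy to quantify; a quick entropy sketch (the thresholds come from the comment, the arithmetic is just log2; the Diceware line is my own added comparison):

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    # Bits of entropy for `length` symbols drawn uniformly at random
    return length * math.log2(alphabet_size)

pin4 = entropy_bits(10, 4)       # 4-digit PIN: ~13 bits
alnum12 = entropy_bits(62, 12)   # 12 random alphanumeric chars: ~71 bits
words6 = entropy_bits(7776, 6)   # 6-word Diceware-style passphrase: ~78 bits
```

Each extra ~13 bits multiplies the attacker's work by roughly 10,000x, which is why a 12-character random string is in a different universe from a PIN.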


Android has a 16 character limit for your password. Or at least it did as of Android 10, not sure if that's changed.


If you root your phone, you can set your FDE passphrase to whatever you want while keeping a usable shorter unlock code. My phone's FDE passphrase is 26 characters long.


Even better, if you have a reasonable worry about losing control of your phone with data that is on there, you should just get a second one that contains nothing sensitive or personally identifiable, uses wifi only, no sim.

This is even good practice if you go to public places where you can get your phone stolen.


What sort of complexity was the S7's passcode? 4 digit pin? more than 4 digits? short password? longer passphrase?

Also, which Android versions? Stock Samsung or an alternative rom? (My S6Edge is "stuck" on Android 7 without replacing the OS with a non Samsung alternative. My S4 is running a much newer Lineage Android version...)


A ?l?d mask (lowercase letters and digits) with, I think, 9 characters, on the default Android OS with updates. Both phones I believe were at minimum Android 5.0+, because that is when they switched to scrypt for storing the password/encryption keys.
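For scale, a hashcat-style ?l?d mask draws each position from 36 symbols, so 9 characters gives a keyspace of 36^9, about 10^14 candidates. A sketch (the guess rate below is an illustrative assumption on my part, not something from the comment; real rates depend heavily on the scrypt parameters):

```python
# ?l?d per position = 26 lowercase letters + 10 digits = 36 symbols
keyspace = 36 ** 9

# Illustrative only: at an assumed 10,000 guesses/second against scrypt,
# exhausting the space takes centuries.
seconds = keyspace / 10_000
years = seconds / (365 * 24 * 3600)
```

Which is consistent with the phones never being brute-forced offline; the S7 was presumably cracked via an OS exploit, not the passcode.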


Thanks.


What phone models?


This address has no known private key https://www.blockchain.com/btc/address/1BitcoinEaterAddressD...

Sure, someone could randomly guess it, but that's not likely. You don't need to have or know a private key to generate a Bitcoin address with a valid checksum that can be sent to.
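The checksum in question is Base58Check: the last four decoded bytes must equal the first four bytes of a double SHA-256 over the rest, a condition anyone can satisfy without ever holding a key. A stdlib-only sketch, verified below against the well-known genesis-block address rather than the truncated one above:

```python
import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58decode(addr: str) -> bytes:
    """Decode a Base58 string; leading '1' characters become leading zero bytes."""
    n = 0
    for ch in addr:
        n = n * 58 + B58.index(ch)
    pad = len(addr) - len(addr.lstrip("1"))
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return b"\x00" * pad + body

def has_valid_checksum(addr: str) -> bool:
    """Base58Check: last 4 bytes = first 4 bytes of SHA256(SHA256(payload))."""
    raw = b58decode(addr)
    payload, checksum = raw[:-4], raw[-4:]
    return hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4] == checksum
```

To mint a vanity "eater" address you pick the readable prefix, pad it, and append the four checksum bytes computed the same way; no private key is ever involved.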


I just created several trillion “disposal” sounding addresses and started sharing the one that I guessed a private key for. There’ll be some real surprises in 2025!


If you can generate the private key to a 27 character or longer English readable Bitcoin address then you can use the same magical technology you possess to earn far more than $1 billion USD.


Previous discussion: https://news.ycombinator.com/item?id=20927197

> Also forfeiture doesn't account for the fact that you may have paid capital gains tax. I feel like they really shafted me on that one.

As someone also charged but not convicted, I learned hypothetically you also don't get to account for expenses if you lose. Meaning you could run a $10 billion gross revenue drug empire, it costs $9.9 billion to run so you walk away with $100m and you still have to pay $10 billion in forfeiture.


If the most you could lose is whatever margin you made, that wouldn't be much of an incentive to not do that in the first place and it would be quite a broken system.


What I wrote about is forfeiture, which doesn't include criminal penalties. Most of the American (I am not American) federal press releases I have read mention a maximum $250,000 fine per count.

Here is a random press release from justice.gov https://www.justice.gov/opa/pr/six-additional-individuals-in...

>The Sherman Act offense charged carries a statutory maximum penalty of 10 years in prison and a $1 million fine for individuals. The maximum fine may be increased to twice the gain derived from the crime or twice the loss suffered by victims if either amount is greater than $1 million. The false statements offense charged carries a statutory maximum penalty of 5 years imprisonment and a $250,000 fine. The obstruction of justice offense charged carries a statutory maximum penalty of 20 years imprisonment and a $250,000 fine.

These fines are on top of the forfeiture so if the person in this article earned $1 million gross, $100k net they would have to repay on a guilty finding $1 million + say $500k in fines = $1.5m when they only "earned" $100k.
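Plugging the comment's numbers in makes the asymmetry concrete (the $500k fine figure is the commenter's illustrative "say", not a statutory amount):

```python
gross = 1_000_000      # forfeiture is assessed on gross proceeds
costs = 900_000
net = gross - costs    # what the person actually kept: $100k

forfeiture = gross     # expenses don't count against forfeiture
fines = 500_000        # commenter's illustrative figure
owed = forfeiture + fines

# owed $1.5m against $100k actually earned: 15x the net
ratio = owed / net
```
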


That makes sense, thanks for educating me :)


Correct. :-(


> Is this a CFAA violation? No hashes were cracked

Cracking hashes alone is not a crime or I'd be in jail.


weev went to jail for adding 1 to a number on a URL... They can do anything they want to you if they decide to...


It’s a little more complicated than that... He didn’t just visit URLs, but downloaded that data, saved it, and posted it up somewhere else. Regardless of the “hacking” charge, posting people’s data is taking it too far. If he had just used it as a POC, that’d be different.


Despite obvious differences both in societal import and trivial details, weev's case was like a practice run for Assange's. Not only is the government using the same playbook, but the character assassins in traditional and social media are as well.


Sure, but what was the intent?

Did you crack a hash as some part of a computer contest?

Or were you helping someone crack the hash with the intent to get access to a US DoD government computer?

The way the law is apparently written, the first is not illegal. The second would be conspiracy.


Neither but yes you're right for both.


> I guess the "modern" browser fixes this for everyone else not using a ("modern") proxy.

Only if the site owner wants it so https://hstspreload.org/
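Opting in means serving the HSTS header with the preload token on every HTTPS response, then submitting the domain at hstspreload.org. A typical header (the two-year max-age shown is a commonly used value; check hstspreload.org for the current minimum):

```
Strict-Transport-Security: max-age=63072000; includeSubDomains; preload
```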


There's a lot they can do, but then it makes it impossible to search your inbox for old emails by subject, from, to etc fields.

You can reduce what an adversary sees by not including anything useful in the subject line and the only thing you can't protect is who you're speaking to and when.


The third party doctrine[1] allows the government to access your call records (and other metadata) without a warrant, but I don't think anyone's fine with that.

[1] https://en.wikipedia.org/wiki/Smith_v._Maryland


>If you can get a dump of the computer active memory you can ultimately get the decryption keys on consumer hardware

What methods are available to get a memory dump if Firewire is disabled? Feds couldn't break my encryption after ~1.5 years but my devices were all off when they showed up. Ironically the one device they did get into was a cell phone powered on but it had little evidentiary value and in one funny way was partly exculpatory.


You can freeze the computer, remove the DIMMs, and then pop them into a different machine to read them: https://electronics.stackexchange.com/questions/32189/freezi...


I thought that wasn't possible since DDR3 or 4?


Smaller capacitors hold their charge for a shorter time, which makes this more difficult, but as I understand it there are no fundamental mitigations.


The fundamental mitigation is full memory encryption using a randomly generated key that changes each time the CPU boots. That exists for some CPUs.


Well, that's not part of the ddr3 or ddr4 spec :)


Where do you store the key?


CPU registers - much harder to pull off and reattach elsewhere.


There's (T)SME.


USB-C+Thunderbolt or eSATA with a vulnerable controller. I know modern OSes have protection against drive-by DMA, but I don't know if they protect against a memory dump when the "user" (or an attacker who gained access to a logged-in system) consents.

All these attacks target decryption keys in memory, so they don't work on devices which are turned off.


I'm assuming the computer is "locked" but online, and no users consent to logging in to run anything. Will any of these methods still work if Firewire drivers are disabled?


In that case USB-C/Thunderbolt won't work, in the default config you need to trust the device and cable before anything too interesting can happen. This can be turned off via bios setting though, so there's room for misconfiguration.

E-SATA or PCIe hotplug might still work. However the former is getting less common, and the latter is uncommon in consumer mainboards.


Thunderbolt in some cases: https://thunderspy.io/


There was a neat patch to store the AES key in debug registers and do round key setup for each block. It was even faster than the normal routine. https://www.usenix.org/event/sec11/tech/full_papers/Muller.p...


You can do it via PCIe as well. [1]

1. https://github.com/ufrisk/pcileech


It's a DMA attack, not necessarily a FireWire attack. Direct memory access is how these attacks worked.

