I get the need to tie in to a recent big news story for exposure reasons, but I think it would be good to be more explicit about the different problems.
We have businesses that are explicitly built on violating privacy.
We have businesses that provide services that require them to collect some private info. I’d put 23andme in this bucket.
We have businesses that have lax security, and actually get their systems broken into.
We have businesses that have fine security, but don’t force users to have good, unique passwords and 2FA. 23andme is in this bucket, right?
The first, we should be happy to run them out of business, like we should actively write laws that try to destroy them.
The third, we should fine them to the point where skimping on security is never a rational decision (and if that runs companies out of business, fine).
The second seems not too bad, every medical-field-related service is going to have some private info necessarily (for example), as long as they don’t exploit it that seems fine.
The fourth seems not so bad, there are all sorts of services that are not so important. I don’t have 2FA on, like, random forums and video games, who cares?
>every medical-field-related service is going to have some private info necessarily (for example), as long as they don’t exploit it that seems fine.
HIPAA is rather stringent, but we don't have anything like that for most other domains where a company might legitimately need confidential information. Instead we have the third party doctrine.
Yeah the medical field was probably a bad example, because of course HIPAA already exists. But lots of services exist that collect data that somebody might consider private as a necessary side effect of doing their thing.
The good/unique passwords stuff is crazy. Almost all of these happen because of a system backdoor or spoofing an admin. I can’t recall any big breaches where adding more $! to a password would have changed the outcome.
I’m pretty sure that “bad passwords” hurt in the 23AndMe case. IIRC the story is they have a service that finds your “dna relatives,” and some people had bad passwords, which meant that attackers could get into their accounts, and also discover information about anyone who’d been matched to them via this “dna relatives” service.
Maybe this sort of “dna relative” service shouldn’t exist, because anyone who opts into it is implicitly putting faith in the password safety of everybody they’ve been matched with. But, I dunno, at least I don’t see this as explicitly evil on 23AndMe’s part.
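The exposure chain described above can be sketched as a toy model: compromising even one weakly protected account also leaks profile data for everyone matched to it through a "DNA relatives"-style feature. All names and the relatives mapping below are hypothetical, purely to illustrate the transitive exposure.

```python
# Toy model: a few accounts with weak passwords get credential-stuffed,
# and everyone matched to them via a "DNA relatives"-style feature is
# exposed too. Names and matches here are made up.

relatives = {
    "alice": {"bob", "carol"},
    "bob": {"alice"},
    "carol": {"alice", "dave"},
}

def exposed_users(compromised: set) -> set:
    """Users whose data leaks: the compromised accounts themselves,
    plus anyone matched to a compromised account."""
    out = set(compromised)
    for user in compromised:
        out |= relatives.get(user, set())
    return out

# A single weak password ("carol") exposes three people:
print(sorted(exposed_users({"carol"})))  # ['alice', 'carol', 'dave']
```

The point of the sketch is that the blast radius of one bad password is everyone that account was ever matched with, which is why opting in means trusting strangers' password hygiene.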
Great idea - I'm going to take personal action to stop my data from getting into the hands of corporations. Now all I need to do is...
- Stop using credit cards and taking loans, or go through every payment processor's arcane process to stop them from sharing my transaction data with advertisers
- Learn what a "tracking pixel" and "device fingerprinting" is and install browsers/extensions to block these technologies, breaking several websites in the process
- Stop receiving images in my emails
- Decompile and screen all my mobile apps to make sure they don't use something like Foursquare's Movement SDK, which allows app developers to create location-aware apps at the cost of sending their users' location data to Foursquare, who then sells it. Foursquare is just one of a half-dozen companies who do this and probably the most transparent, so I've got my work cut out for me.
- Gather a list of all major customer segment providers (companies like Neustar and Claritas) and figure out where and how they get their individual-level data (this is often a fiercely guarded trade secret) so that I can avoid making it into their data collection funnels.
- Find a mobile carrier who won't sell my cell phone data.
- Find a pharmacy who won't sell my aggregated prescription data to IQVia or one of its competitors
- Stop voting (voter rolls are often public or buyable and include your address)
- Never donate directly to a political campaign
- Only buy a house via an LLC with a registered agent
- Only deal with merchants who won't sell my transaction history
Yeah, this seems easier and more tenable than privacy legislation. Definitely.
The problem is when "just don't give personal information" is taken out of your control.
For example, I have largely cut Facebook and Google out of my life (Facebook lingers for keeping up with a few people but its access is severely limited).
However, that does nothing to stop my data that happens to be on my friends’ phones (my cell number or chats, for example) from being gobbled up by both of these companies if they allow it.
There is basically no way for me to stop Google from buying my credit card information. Just saying use a different card really isn't a realistic option.
These laws are necessary because there are many instances of a complete lack of consent on my part for these companies getting my data.
I’ve never understood the specific privacy concern I should have about my genetic code. The things I consider private in my life never surface in my biology; they surface in my behaviors, experiences, thoughts, associations, and other aspects of my lived life. While I don’t want an insurance company to adjudicate coverage and claims from it, I don’t know what hackers or customers of illegally obtained health or ancestry DNA data are reasonably going to do with it that impacts me in the least. I’d love to be convinced otherwise, but at some level I’m almost at the point of thinking anonymized genetic codes should be generally available with health histories for researchers, not locked away behind privacy screens more rigorous than everything I actually wish could be kept private.
The big news article was people breaking into 23AndMe accounts to find Ashkenazi Jews. That’s pretty creepy, IMO. I mean there are obviously countries out there that would weaponize that sort of info, doubly so if you look back in the historical record.
In a perfect world, nobody would worry about their ethnic or racial information leaking. If you know how to get to that world I suspect lots of people would be interested in going.
Maybe that level of privacy isn't something that you worry about. That's perfectly fine. We all decide for ourselves what risks we consider acceptable.
But we should also support others being able to have privacy even for those things we aren't personally worried about. It's almost certain that there are other things they don't worry about, but that you do.
If there are genetic factors that predispose certain behaviors/traits, those would almost certainly be used for targeting marketing/manipulation campaigns.
Seems like good advice, but I’m not sure the police are buying illegally obtained DNA sequences from hacker groups. (But I’m sure there are nations that do)
You can take personal accountability for driving collective change, which is what is required here. 23andMe's business practices should not be legal, they are recklessly endangering the lives of people who have never used their service or agreed to their terms, and the only thing I can see that can stop them is legislation.
Their business practice of not securing the population's genetic information with even basic measures, such as enforced two-factor authentication, compromised the lives of many innocent non-customers through a basic, low-skill attack like credential stuffing.
I would straight up say they need to be regulated as a healthcare provider and be subject to the security provisions of HIPAA. That would be a start.
The law (HIPAA not HIPPA) would need to be changed because they aren't a healthcare provider. I'm not sure if they are a BA (business associate) in any contexts, either.
The ability of relatives, friends, and acquaintances to completely invalidate my own actions is the entire reason this advice falls flat. Unless the advice is to simply never interact with those people, to the point of my friends and family not even having my phone number. Personal action is nonviable when the entire rest of the system is built to sidestep it.
> You can't control what your relatives do, unfortunately.
That’s exactly what laws are for. Making “best practices” enforceable. Imagine if not killing people was just considered a “best practice” rather than something you would face severe consequences for.
In this case the best practices need to be enforced against 23andme and their ilk.
I never used 23 and me but I did use Lemonaid Health a few times several years ago and found out that apparently I too was affected by this hack in the form of an email telling me to reset my password - with no other information.
Apparently 23 and Me bought Lemonaid at some point. And I only ever used Lemonaid to get prescriptions for medications that really shouldn't even need prescriptions, but the United States healthcare system is insane.
It's just a comedy of late stage capitalism. And isn't the one privacy law the US does have HIPAA? So shouldn't this run afoul of that somehow?
I'm aware HN has a dim view of the GDPR, but I previously worked in compliance and it was a sea change in how big corporations and organizations viewed data collection.
User PII and especially sensitive data suddenly was viewed as "toxic" and that having it around was something that could only bring them hassle.
California's data privacy acts are similar (but much more narrowly focused).
Also, I always like to sum up what the intent of these acts typically is and what compliance means:
- Tell people what data you're going to collect, what you do with it, and who you share it with
- Get their consent, and let them opt out
- Let them see, correct, and delete their data on request
- Collect only what you need, keep it secure, and disclose breaches promptly
User data should be treated like uranium, not like oil. Both are valuable, but you don't want to just accumulate and store uranium. You want only as much as you absolutely positively need, hold it securely, and then to get rid of it as soon as you don't need it anymore.
The United States is unlikely to have a national privacy law in the foreseeable future due to the extensive lobbying by companies that depend on violating the privacy of its citizens. For the same reasons we are unlikely to have true Net Neutrality, there is too much money opposed to it.
The United States can’t even carry out basic governmental functions. The House of Representatives has been without a speaker for going on a month now, and each election cycle degrades things further, to the point that the government may stop functioning properly.
We are unfortunately well beyond the point of expecting the legislature to legislate, let alone pass robust beneficial laws.
What's missing is a catastrophic privacy event. People have been yelling from rooftops about how terrible losing your privacy could be, but the horror stories aren't coming true. There aren't any wide-scale or pervasive negative consequences of a loss of privacy, and there are a lot of immediate positives.
An insurer needs to start disqualifying anyone who, I dunno, is shown in Google Maps to be out at bars past 11 PM or something like that. Until you can show the practical problems associated with a lack of privacy, people won't care, and I would even argue that those people aren't entirely unjustified in that belief.
> What's missing is a catastrophic privacy event. People have been yelling from rooftops about how terrible losing your privacy could be, but the horror stories aren't coming true.
Now in this particular case the magnitude of his crimes make it a little hard to feel bad for him, but once the method is established the only thing we have restricting its replication for more petty crimes is the discretion of law enforcement, and explicit legislation.
I think you're missing the point if your example of "wide-scale or pervasive negative consequences" is "one guy who definitely murdered a bunch of people got caught".
I imagine it would take a sympathetic victim, like a child, being identified, located, and murdered using breached data before a "Megan's Law" for data privacy passes.
Nope, as that's basically like being struck by lightning. Needs to be both common and punishing, as that's what the doomsayers are claiming is inevitable.
Reading that table, it sounds like 23AndMe should be classified as a Health Care Clearinghouse. I'm assuming they send samples to a lab for processing, and then convert those results into a standardized data format.
No, the clearinghouses are middlemen between healthcare providers and health plans. They are a covered entity who deals in processing data for and from other covered entities. I don't think 23andme fits any of the bins. They're just a non-healthcare-related company that ends up with data that would otherwise have only been available to healthcare providers (and the other covered entities they deal with).
It was written with a narrow focus on regulating the healthcare system rather than explicitly and comprehensively protecting a class of data, so it just wasn't meant to impose any restriction on every other party a person might disclose medical information to.
Not really understanding your point; in both instances the hacker or attacker is the responsible one, not the victim. But the users in the 23andMe case could have used basic password practices to prevent this form of attack.
In the street, there's no clothing that can keep you safe.
Thank you for calling this what it is - a hack - despite 23andme's strenuous efforts to paint this as the fault of millions of users (see https://blog.23andme.com/articles/addressing-data-security-c...) rather than owning the vast technical or management failure that allowed this to continue undetected for months.
Without the risk of a giant fine or, say, jail time, many tech giants can and do get away with managing their data security badly.
That's right. It's happened before, and will continue to happen as long as there are no consequences.
Note that 23andMe is not the first online genealogy service to get hacked:
These are incidents that have been made public as required by law. There are surely thousands of other smaller incidents that are not reported, as well as major breaches that the companies themselves don’t even know about yet. And it will continue for years to come until lawsuits or brutal regulations with teeth are enacted.
To be fair, 23andme wasn't hacked, at least not with the traditional definition. Their users who were reusing passwords got hacked.
That said… Should they have required 2FA? Yes. Should they have taken proactive action on behalf of their users when they appeared in "Have I Been Pwned"? Yes.
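Checking users' passwords against known breach corpuses is cheap to do: Have I Been Pwned's Pwned Passwords range API uses a k-anonymity scheme where only the first five hex characters of the password's SHA-1 hash ever leave your server, and the response lists matching hash suffixes with breach counts. A minimal sketch of the client-side logic (the API endpoint is HIBP's real one; the function names are mine):

```python
import hashlib

def hash_parts(password: str):
    """Split the uppercase SHA-1 hex digest into the 5-char prefix
    sent to https://api.pwnedpasswords.com/range/<prefix> and the
    35-char suffix matched locally against the response."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def count_in_range(range_body: str, suffix: str) -> int:
    """Parse a range-API response body (lines of 'SUFFIX:COUNT') and
    return the breach count for our suffix, or 0 if it's absent."""
    for line in range_body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix.upper():
            return int(count)
    return 0
```

A service would fetch the range for `prefix` over HTTPS, then refuse (or force a reset of) any password where `count_in_range` comes back nonzero; the full password hash is never transmitted.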
> We have businesses that are explicitly built on violating privacy.
> We have businesses that provide services that require them to collect some private info. I’d put 23andme in this bucket.
> We have businesses that have lax security, and actually get their systems broken into.
> We have businesses that have fine security, but don’t force users to have good, unique passwords and 2FA. 23andme is in this bucket, right?
> The first, we should be happy to run them out of business, like we should actively write laws that try to destroy them.
> The third, we should fine them to the point where skimping on security is never a rational decision (and if that runs companies out of business, fine).
> The second seems not too bad, every medical-field-related service is going to have some private info necessarily (for example), as long as they don’t exploit it that seems fine.
> The fourth seems not so bad, there are all sorts of services that are not so important. I don’t have 2FA on, like, random forums and video games, who cares?
Combining two and four is pretty bad though.