Google's ‘Project Nightingale’ Gathers Personal Health Data on Millions (wsj.com)
399 points by big_chungus 24 days ago | 266 comments



Google was recently handed the confidential medical data of 1.6 million patients treated at a London hospital (Royal Free), for use by Deep Mind, in breach of the Data Protection Act: https://www.theguardian.com/technology/2017/jul/03/google-de...


I'm actually still struck by how literally nobody seems to give a shit, or even know, that this happened. It's really sad.


I’m surprised that people are still surprised by this, but then I’ve gotten pretty cynical at this point.

This phenomenon goes way deeper than people not caring about privacy. It underlies everything from the bystander effect [1] to the Nuremberg defence [2], and everything in between.

People just don’t want to organize, generally speaking. The people who do usually end up as part of the elite. The rest are just focused on their individual lives.

[1] https://en.wikipedia.org/wiki/Bystander_effect

[2] https://en.m.wikipedia.org/wiki/Superior_orders


> People just don’t want to organize, generally speaking

They used to, far more. The eighties and nineties disassembled many of the places and means of organising: the breakup of civic community and unions, restrictions on demos, the decline of social societies such as the Lions Clubs and others. Now we're all individuals, organising alone.


Related:

https://splinternews.com/organized-labor-is-in-a-life-and-de...

"For an entire century, the corporate class in America has dreamed of destroying organized labor, which eats into their ability to amass endless profits. They’ve chipped away at unions for 40 years. Now, it is no exaggeration to say that they are two steps away from total victory."

...

"This is a very scary time. Take it seriously."


[flagged]


I watched that doco, Waiting for Superman. Man, the education system is crazy if it's true.


I'm not too familiar with teaching in the US. Could you elaborate on the problems caused by the teachers' union?


Basically, the union puts teachers before students. Which doesn't sound too bad, until you realize they always put teachers before students. They make it impossible to fire teachers who are known to be awful, even when they've been offered incentive programs to eliminate bad teachers (they would literally be paid to fire bad teachers).

This doesn't sound too bad, until you see what happens after decades of rot. The American education system, according to anyone who's ever tried to fix it, is legally unfixable so long as the union exists. That union can block any change from happening, and they block everything that would threaten any teacher. The reason we don't hear more about this is that rich people send their kids to private schools. I'm not saying the education system is simple and eliminating this union will magically create the best education system on Earth, but I definitely believe they single-handedly bear more responsibility than everything else combined, including poor funding for public schools.

While it's obviously trying to convey a message, I'd strongly recommend the documentary Waiting For Superman.

https://www.imdb.com/title/tt1566648/


> EDIT: I really fucking hate when people edit their comment after people have already replied to it.

My message is just a link and the quote from there, and it was just that since the beginning. So do you really think I've edited anything that changed the meaning of the message, or is that "edit" of yours unrelated to my message?


I apologize, I remember the comment being longer. I also did not mean it as an attack against you specifically, more that the act itself is annoying (not necessarily that you had ill intent, it could entirely be coincidental).

I realize this is HN and we are expected/encouraged to interpret people's meanings with the best possible intent, but rereading that edit I can understand why you would feel otherwise.


> (not necessarily that you had ill intent, it could entirely be coincidental)

I'm sorry, but in this case I'm absolutely sure there was never anything but the quote there, and the confusion is due to some other comment.


I agree with you, the mistake was mine. What I meant by

> it could be entirely coincidental

is that it's possible someone edits a comment before they see there's a reply to it (this is pretty easy to do with HN's interface).

At first I was frustrated because I thought your comment had been edited after I replied to it, and then my second comment was an apology for erroneously believing it had been changed.

I apologize again for the confusion.


> is that it's possible someone edits a comment before they see there's a reply to it (this is pretty easy to do with HN's interface).

For years I've used "delay" in my HN user settings:

https://news.ycombinator.com/item?id=231024

(Just pointing it out for those who missed that possibility.)


Noam Chomsky on the lack of continuity: https://www.youtube.com/watch?v=5meC4Z61qGg&t=15m45s


You do realize that the bystander effect hasn't actually been observed in reality, right? I mean, just look at the examples in the article. Every single one had people reporting it.

And the fiasco that started that conjecture was the `Kitty Genovese` murder, which was repeatedly reported as well. The police just didn't care enough to actually dispatch someone that day. Or are you insinuating that random people from the street should intervene when there is an armed perpetrator running around knifing people?

Or `Raymond Zack`... how do you expect random people to actually do something? He wanted to kill himself. If he wanted to get out of the water, he could've done it within the 60 minutes he was just standing there. And after he collapsed, someone even went in, despite the fact that it's incredibly dangerous to try to help someone who is actively trying to kill himself. If that doesn't disprove the conjecture by itself, I don't know what could.

Each and every example just shows that there are tragedies the police aren't able to handle. That doesn't mean the bystander effect is actually real.


You could argue that it's not that people don't want to organize, it's that they can't, because of structural reasons. Scott Alexander wrote a lot of words going deep into this problem: https://slatestarcodex.com/2014/07/30/meditations-on-moloch/.


I’ve read that piece many times. It’s frustrating because I could point to it for all societal problems and it doesn’t offer any solutions.


This is what I criticize about the increasing pseudo-individualism in the western world.

If all of us want to be completely different from all the others, we don't have much in common anymore, and there is no reason to unionize or fight for the same goals.

Every little protest or group can then be shut down or silenced just by stating that they represent only a really tiny minority of all the people.

I know this could spark some debate, but I think this is a point where Marx was right all along. It's the things we have in common that help us collaborate to tackle our problems, and we are strongest when working together.

I've gotten cynical as well; I feel like I'm constantly talking to a bus full of people who are interested in their freedom, a bus that just never arrives anywhere.


It's an interesting issue. My startup is in the healthcare space, primarily in the USA, although I'm currently based in London. There's definitely a group here who think that NHS users just need to accept that their data is going to be fed into models and that it's part of the price of universal coverage, as it will help outcomes and costs etc. I'm still on the fence about it, but having seen some of the things that AI is catching out in the field, I'm tending towards universal good over personal privacy - though I may regret that... I'm thankful that at least the data can't be used to adjust individual insurance premiums.

As an aside, it is not just DeepMind/Google; all the players are trying to get all the EHR data around the world. The NHS is particularly attractive due to the volume of historical data, of course. It's no secret IBM Watson is working with NHS Wales, although I don't know what data sharing agreement is in place.


> NHS users just need to accept that their data is going be fed into models

That's fine so long as it goes nowhere else - it's perfectly reasonable for the NHS to have AI models to do their thing, just as the Met Office and others do. Remember years ago Google had a Search Appliance? A rack mount server to put in your business. The NHS should be doing the same with IBM, DeepMind and other models - figure out a way to licence and use the technology without sharing any data back whatsoever.

Data for training, for helping IBM et al. build better models, should be 120 years old or more - guaranteed not to contain anyone alive. The restrictions on medical privacy are there for very good reasons, though I tend to the view they don't go nearly far enough for the modern world.


Some things are genetic, so even if it is the data of your parents, it will still influence you.

The only way is to completely anonymize the data, which is way harder than it sounds.


> The only way is to completely anonymize the data, which is way harder than it sounds.

What's complicated about it? So long as it's not attached to a name, social security number, or some other personally identifiable marker - how would anybody know it's you? You're just an anonymous collection of data points.

I understand whatever fingerprint they use for your data is almost definitely completely unique, but that doesn't mean it can be associated with the corresponding person unless you allow that to happen. I mean, I get that it's obviously not trivial from a security perspective, but what's the main obstacle here?


For most people in the last 10 or 15 years, search and Wikipedia very often (probably almost always, barring casualty treatment) presage what they later go to their doctor about.

What's complicated is Google and others are extreme data hoarders. Google can get the anonymous health records and run a fuzzy match of model records against Home, URL and gmail records. Find high confidence correlations against multiple conditions across the years, where the search preceded new medical entries for those conditions within x time. Now they're in this data set for condition y and oh lookie, they searched condition y related topics multiple times last year. Anonymous record i1389541 is Alice Smith of Wimbledon, set her adwords flags to condition y snake oil, sell record to UK life insurance consortium. [repeat]. No personal data was released, but you're fully de-anonymised.

With enough data and history it becomes absurdly easy to decloak the majority with high or complete confidence.
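
To make that concrete, here's a minimal sketch in Python of the linkage step. Everything in it is hypothetical - the field names, the one-year window, the data layout - it's just the shape of the correlation, not anything Google is known to run:

    from datetime import timedelta

    # Hypothetical sketch of a linkage attack: correlate "anonymous"
    # medical records with per-user search logs, flagging users whose
    # condition-related searches preceded a matching diagnosis entry.
    WINDOW = timedelta(days=365)  # assumed "within x time" window

    def candidate_identities(anon_record, search_logs, keyword_map):
        """Rank users by how many diagnoses their searches predicted."""
        matches = []
        for user, searches in search_logs.items():
            hits = 0
            for diag in anon_record["diagnoses"]:
                keywords = keyword_map.get(diag["condition"], [])
                if any(kw in s["query"]
                       and timedelta(0) < diag["date"] - s["time"] < WINDOW
                       for s in searches for kw in keywords):
                    hits += 1
            if hits:
                matches.append((user, hits))
        # With several independent conditions matching, the top
        # candidate is usually unique - the record is de-anonymised.
        return sorted(matches, key=lambda m: -m[1])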

Welcome to the future.

The alternative is to find a way to prevent those uniquely identifying correlations at any point in the future, using techniques we haven't yet thought of. Which is probably impossible without diverging the anonymous medical record some considerable distance from reality. Faked enough to remain anonymous might not be helpful for research into a condition or its diagnosis...


The obvious solution to this problem, assuming it is as pervasive as you describe (I feel you are slightly exaggerating, but it's really not the important part), is to regulate the data hoarders.

That said, that's completely different from what I implied in my original comment, so point taken.


Sure, regulation is the solution, and I think a very slow journey to regulation has started. Whether it will come soon enough or be adequate is open to question. We've seen Google and FB engage in systematic overreach every single time there is data for the taking - from Street View harvesting WiFi and taking photos over privacy fences, to systemic use of dark patterns.

If I'm exaggerating, it's from trying to make up a credible example off the cuff. When the Snowden revelations were coming thick and fast, I remember several examples coming up around search streams and meta-data. Mostly terrifying, as either could reveal what you appeared to be thinking in the moment, in far more detail than the actual content of messages. Tie that to medical and health data and any pretence of privacy falls.


Force them to use old-school floppy disks to store and transmit records (you know, for backwards compatibility). That should be a good incentive to minimize data hoarding. /s


There was a good talk[0] at the latest London CryptoDays. There were many talks about applications of and attacks on differential privacy, but this one was quite remarkable in highlighting just how hard it is to remain anonymous against good traffic analysis.

The key insight[ß] was that privacy researchers are essentially rediscovering the information-theoretical deanonymisation attacks that have been known to advanced marketing departments for the past two decades.
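
(For anyone who hasn't met differential privacy: at its core is noise calibrated to a privacy budget epsilon. A minimal sketch of the Laplace mechanism for a count query - not from the talk, purely illustrative:)

    import numpy as np

    # Minimal illustration of the Laplace mechanism. A count query has
    # sensitivity 1 (one person changes the count by at most 1), so
    # adding Laplace noise of scale 1/epsilon makes the released count
    # epsilon-differentially private. Smaller epsilon = more privacy.
    def dp_count(records, predicate, epsilon=0.1):
        true_count = sum(1 for r in records if predicate(r))
        return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

    # e.g. dp_count(patients, lambda p: p["condition"] == "diabetes")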

0: Evgenios Kornaropoulos, UC Berkeley: "Attacks on Encrypted Databases Beyond the Uniform Query Distribution"

ß: anyone present at the last talk should recognise the term used.


Being fed anonymized data vs. full details is where I'd draw the line.


If you've ever been to a hospital in the US then your health data has definitely been sent to a third party to be fed into models. It's standard practice pretty much everywhere. With that said, the third parties are (generally) subject to all of the same HIPAA regulations that providers are. They sign contracts that effectively make them covered entities under HIPAA.


This isn't really news; it was heavily reported by the BBC at the time: https://www.bbc.com/news/technology-36191546 (and multiple later articles). I'm not directly affected by this, but I recall it being well reported at the time, and I knew about it.

On the other hand, personally I wish my medical history could be scanned and screened for preventable terminal diseases by some AI (preferably with some form of consent, though).


I'm not going to dispute that this is wrong (the ICO determined so), and that Google should be punished for breaking the law.

That being said, I would also respect someone who said that it is nice to see Google aid in medical research and see some of the best minds of ML allowed to access high quality data to develop these solutions.

This was not a Google ad network integrating medical records in order to target the sale of medicine... It was a medical research project.

I'm going to get fired up once someone gets hurt. I know someone is going to call slippery slope, but until someone can prove harm (beyond the broken trust and principle of data sharing), I'm going to save my pitchfork for bigger problems.


Everything at Google is ad related, and the output from these analyses will eventually become part of Google's ad algorithm.


Everything in Google Cloud and paid G Suite is 100% not used for anything ad related, and you can check the terms and conditions yourself.


Why are you surprised? I can count on one hand the number of people I know personally who care about data privacy or anything remotely close to the idea of data privacy. However, I do agree, it's really sad.


Do those people use a credit card or cell phone? I think many people care, but not enough to stop participating in society, or even enough to stop posting on their favorite web forums. Modernity demands compromise; we abide.


As Snowden said, the problem isn't the data policies but the data collection to begin with. Solve the latter and the former is irrelevant.


I hope they find a way to properly anonymize the data and move forward. A minimum price to charge would be to make the results public. Even better would be a way for the public to analyse that data via Deep Mind, like an observatory telescope with public access time. There is likely life saving data to be found here. It would be a shame for privacy concerns to prevent studying it.


Even if they did, there have been many, many HN stories of data being deanonymised absurdly easily, and health data tends to become unique very quickly indeed.

How many females in a (county|country|postcode) born on 5 March 1942 are there with a kidney problem? Particularly in the UK, where a postcode, even the locality selector (the first half, e.g. M50, NW11), is more precise than many US zip codes that can cover many square miles. A full UK postcode narrows it down to within the street - a dozen or two houses usually. What if there is associated medical data like medications or chronic conditions - diabetes, arthritis etc.?

To make it suitable for study: a) keep it within the NHS or relevant health authority - perhaps allowing an airgapped DeepMind server to come in, process locally, and afterwards get wiped or shredded; or b) remove all personal data including gender, dates, associated medical info of any type - i.e. render it almost useless for study.

Oh, and c) imprison all the management who agreed to share this data without patient consent and in breach of the DPA, and require Google to remove all trace under audit.


There are mathematical ways you can guarantee different levels of anonymity, though, without removing all identifiers. E.g. you can increase age and location buckets until, for every unique set of identifiers, you have a good distribution of attributes.
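
A rough sketch of that bucket-widening idea - essentially k-anonymity. The column names, bucket ladders, and k are all made up for illustration:

    import pandas as pd

    K = 5                          # every quasi-identifier combo must cover >= K people
    AGE_BUCKETS = [1, 5, 10, 20]   # years per age band, coarsest last
    POSTCODE_CHARS = [7, 4, 3, 2]  # full postcode -> outward code -> area

    def generalise(df, age_years, pc_chars):
        out = df.copy()
        out["age"] = (out["age"] // age_years) * age_years
        out["postcode"] = out["postcode"].str[:pc_chars]
        return out

    def smallest_class(df, quasi_ids=("age", "sex", "postcode")):
        return df.groupby(list(quasi_ids)).size().min()

    def k_anonymise(df):
        # Widen the buckets until no combination is rarer than K.
        for age_years, pc_chars in zip(AGE_BUCKETS, POSTCODE_CHARS):
            candidate = generalise(df, age_years, pc_chars)
            if smallest_class(candidate) >= K:
                return candidate
        raise ValueError("can't reach k-anonymity; suppress outliers")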


And yet, you know this will not happen, because it won't yield the best and most profitable model for Google.

Google will do what they have always done: ignore privacy, train models on full data, apologize later and delete data when no longer needed.

You don't know this, but people at Google are gangsters and bullies. They don't give a hoot about laws in Europe, about privacy or any of that. They think they are right and that everyone else is wrong.

See the guy above who said this is for the collective good, even though everyone who knows even a little bit of stats and machine learning knows full well that this sort of sensitive data will lead to unimaginable grief for outlier patients. In particular because Google cares about money, and not the collective good!

Everyone should be outraged; it is the only rational response.


What is this "unimaginable grief" you think outlier patients will receive if anonymized data is provided to Google? Seems a bit hyperbolic.


What’s your goal, though? Is there some harm you’re seeking to prevent? Are you interested in privacy for privacy’s sake? Have you considered the potential benefits of this research that might be lost under your approach?


Companies (like Google) exist for the purpose of making a profit from your data. Their interests and yours are fundamentally not aligned.


As opposed to farms, banks, utilities, service providers of all types, manufacturers of just about every good in existence?


two words make the difference: "your data"


Who said that proper anonymisation doesn't remove exact birthdates and fine-grained location? Proper anonymisation techniques are much harder to attack.


Anonymisation after it's all gone to Google is rather moot, don't you think? Besides, in this particular case they rode roughshod over both the DPA and our entire structure of medical ethics, and sent across entirely non-anonymised data.

"proper anonymisation" is a set of goalposts that keep moving - it's like password length or bcrypt work factor. What's enough today won't be enough with CPU, or in a year or three, or perhaps a suitably fancy AI from the world's largest data hog. Proper anonymisation likely requires all associated data - any dates, blood group, medications, chronic conditions etc removing, hence mention of making it so bland as to be almost useless for study.


Heh, you go on and on about an imaginary weak anonymisation technique, then say all is moot, and then go on and on about pure speculation. It is you who is moving goalposts.


Interesting how you're glossing over the fact this violated the Data Protection Act.


Google DeepMind's acquisition of Hark (https://www.businessinsider.com/google-deepmind-sets-up-heal...) was apparently to get the team that could get this sweeping level of data: https://www.businessinsider.com/google-deepmind-sets-up-heal...

The Hark team (https://www.harkapp.co.uk) brought huge influence in the NHS and at the national level: https://en.wikipedia.org/wiki/Ara_Darzi,_Baron_Darzi_of_Denh...


Serious question: is it the same law that requires websites to show a cookie pop-up?

Because I let users know I'm not going to spam them with that shit, and those that enacted that law can go to hell for making every site I visit have it. It's a good thing but really annoying. There has to be a better, more interesting way.


Agreed - especially on mobile, a full-page thing I have to click through to accept the cookies?? If I didn't want cookies I'd just block them and see how the web worked. This should be a browser setting, not a required click-through on every website.

Your browser controls your cookies - use that, clear them, do whatever you want.


They are different but closely related 'laws'.

The 'Cookie Law' is the 2011 ePrivacy Directive. It will be replaced by the ePrivacy Regulation (eventually). https://www.cookielaw.org/the-cookie-law/

The modern privacy law is GDPR, which came into force in 2018. Pre-dating that were the somewhat country-specific interpretations of the 1995 Data Protection Directive.


For the people defending Google, can someone explain a 'positive' take on making this arrangement completely in the dark? As far as I know, other companies working in healthcare make big announcements of their partnerships (see https://www.apple.com/newsroom/2019/09/apple-announces-three...). Meanwhile, this was uncovered by an exclusive leak to the WSJ. Why are they hiding it? Competitive advantage, or are they aware of the dubiousness of this sort of move? Anyway, I hope we get a better message from Google than "Google spokeswoman said the project is fully compliant with federal health law and includes robust protections for patient data. An Ascension spokesman had no immediate comment".


Apple's product is launched commercially. Google's alleged product is still in the research phase. Neither company touts its research projects that aren't out in public yet.

Google's launched healthcare products are here: https://cloud.google.com/solutions/healthcare/

It's hard to say what's going on in the WSJ article, since it has no details.


There are a million reasons not to make an announcement until a later stage. Here's just one: if they announce, and end up not releasing anything (for whatever reason) people will cynically point to it as another example of something promising killed by Google.


“cynically”?


Eh? It's obvious: this is a pilot project where PII is screened out.


The article very clearly says:

> The data involved in the initiative encompasses lab results, doctor diagnoses and hospitalization records, among other categories, and amounts to a complete health history, including patient names and dates of birth.


Anonymized PII involves names, yep. TYL.


While it's not clear what Google is doing here, I can tell you what most companies in the healthcare data space do when dealing with data like this - they use an external vendor to essentially encrypt the "personally identifiable data" (name, exact DOB, etc.) into hash tokens before the data hits their servers from the partners. This means the data is still tied to a patient but not easily identifiable (at least some orgs would run reidentification risk analyses based on the types of data available within the org). It also means that an org like Google can (assuming it chooses to) still work with healthcare data without explicitly tying it to all their other data, in a legally compliant manner.
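
For a sense of what that tokenisation step can look like, a minimal sketch - the key handling, field names and format are all illustrative assumptions, not any particular vendor's scheme:

    import hmac, hashlib

    # Keyed hash (HMAC) in place of direct identifiers: the same patient
    # always maps to the same opaque token, but without the vendor's
    # secret key the token can't be reversed or joined to other data.
    SECRET_KEY = b"held-by-the-external-vendor"  # illustrative only

    def patient_token(name: str, dob: str) -> str:
        msg = f"{name.strip().lower()}|{dob}".encode()
        return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()

    def strip_pii(record: dict) -> dict:
        out = dict(record)
        out["patient_token"] = patient_token(out.pop("name"), out.pop("dob"))
        return out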

Maybe a HIPAA expert can chime in, but I'm reasonably sure it would be a violation to show you ads on your devices based on personal healthcare data that Google obtained without your explicit consent. It might, however, be legal if Google uses all the patients' healthcare data to train a model that it can use on all users.


There would definitely be a BAA in place between the providers and Google. My business has one with Google for hosting our cloud data. You know those forms you sign about HIPAA when you visit a provider? One of the things they say is the provider will only share data when a BAA is in place - well, that's the HIPAA rule:

"The Privacy Rule allows covered providers and health plans to disclose protected health information to these “business associates” if the providers or plans obtain satisfactory assurances that the business associate will use the information only for the purposes for which it was engaged by the covered entity, will safeguard the information from misuse, and will help the covered entity comply with some of the covered entity’s duties under the Privacy Rule."

https://www.hhs.gov/hipaa/for-professionals/privacy/guidance...


If Google is getting Protected Health Information (names, addresses, dates, etc.) from providers then they have to sign a Business Associate Agreement, which effectively makes them a covered entity under HIPAA. They would be criminally liable for any misuse of the data, which would include using it to inform their advertising.

Using the data to create a model for use at the providers would be a clearly proper use of the data. That's standard practice across the industry.


What about Google employees looking for damaging data on adversaries? Or stalkers, looking for information on particular individuals without consent.

Why does Google get PII?


To take your argument to its conclusion, what if someone at Ascension wanted damaging information on one of its patients?

They'd be legally liable, same as those they sign a BAA with.


Those are both illegal as well, and the employees who did that would be held personally liable.


Anyone else find it strange that in the US, with self-insured health care plans (the norm at large companies), your company has complete visibility, through claims data, into every health care interaction you have while employed?


This is not true. I don’t like having my health care tied to my job - they are orthogonal concerns. But in the US, your employer does not have access to your healthcare records.


If it's a "self-insured" plan[1], your employer potentially has access to all insurance claims because they are the insurer. That's not the same as access to complete healthcare records but it's pretty telling information.

[1] https://www.healthcare.gov/glossary/self-insured-plan/


> If it's a "self-insured" plan[1], your employer potentially has access to all insurance claims because they are the insurer. That's not the same as access to complete healthcare records but it's pretty telling information.

No, companies almost never manage the claims themselves. They underwrite the plans, but managing claims is cost-prohibitive, as it's outside the core competency of most large companies.


Sure, but they can get access to the claims data from whoever is managing it, as they're paying for the claims.


Hence Tim Armstrong's comments about AOL spending a couple million dollars on "distressed babies" being the reason 401(k) benefits were cut.

https://www.newyorker.com/news/amy-davidson/whose-distressed...


They definitely do if they're self-insured. That doesn't mean your boss can just pull it up willy-nilly, but I promise someone in HR can. I've sat in on conversations at my company where we've been pitched claims analysis tools for our own HR department to use.


> They definitely do if they’re self insured.

Self-insured doesn't mean they manage claims themselves. For most companies, managing claims would be cost-prohibitive and an absurdly wasteful use of resources.


Most companies use a Third Party Administrator (TPA) for the actual back office claims management. That doesn’t mean people in HR and Finance don’t have visibility into claims data.


> Most companies use a Third Party Administrator (TPA) for the actual back office claims management.

Yes

> That doesn’t mean people in HR and Finance don’t have visibility into claims data.

I mean, this is technically a true statement - "A does not necessarily imply not-B" - but kind of irrelevant, because as it turns out, HR and finance generally do not have visibility into individual-level claims data (as opposed to aggregate data, which is necessary for underwriting).


>, HR and finance generally do not have visibility into individual-level claims data

I'm not debating with you but somehow, Tim Armstrong (then CEO of AOL) found out about healthcare claims for one of his employees.[1] The story isn't clear on whether the granularity of the claims data would reveal to Tim who the particular employee was. (Even if Tim didn't know the exact employee, I'm sure he could ask a few questions and/or look at employees' sick day records to figure out which employee it was. If the company pays $1 million in a health claim, that's not necessarily going to stay a secret.)

[1] https://slate.com/human-interest/2014/02/tim-armstrong-blame...


I mean, key word "generally"?

I'm not suggesting that there's a team of employees watching you every time you pick up a prescription, but of course they could access an individual claim if there was a legitimate reason to. Not sure why you'd think an insurer would be legally barred from doing so when they are literally taking on the risk themselves, even if only to audit the TPA to make sure it isn't stealing from them.

If you are an extremely ill employee and you’re using a lot of healthcare your employer will know without you needing to inform them.


Remember when Tim Armstrong, CEO of AOL, blamed an employee's new baby's health issues on benefit cuts [1]? He may not have had "full access", but he basically blamed an employee's family health issues, publicly, as the reason for employee benefit cuts.

[1] https://slate.com/human-interest/2014/02/tim-armstrong-blame...


Your sentence confuses me. Did he blame the cuts on the baby? Or the baby's issues on the cuts?


It’s in the linked article. The CEO (after a great financial quarter) announced benefits cuts, blaming costs and using as examples two “distressed babies” that cost the company $1M each. Armstrong was making $12M, by the way, so in other words, 12 Distressed Babies per year in compensation.


They do if the plan is self funded. If it’s not, they still get anonymized usage reports from the carrier that list all the claims with basic demographic info like gender, age range and whether it was the employee, child or spouse who the claim was for. You probably wouldn’t know if your company’s plan is self funded or not.


They definitely have access to at least aggregate data, see this kinda thing: https://collectivehealth.com/solutions/actionable-insights/


It's funny reading the replies to your comment.

For the most part, people can't seem to distinguish claims data from healthcare records... it's pretty scary, as these are the most likely people to receive funding to "disrupt" the health care industry.

Claims data is not equal to medical records, but to your point, it is extremely valuable data and can easily act as a proxy for medical care - even more so when tied together with Rx claims data. Even providers (doctors/hospitals) themselves don't have access to insurance claims data for patients... which, unironically, results in perhaps a million hospitalizations and billions of dollars in healthcare costs per year because of the lack of data sharing with providers.


It’s actually pretty shocking how few people seem to realize this about large employers...this is clearly not well understood at all.


As you mentioned, claims data is arguably worse from a privacy standpoint. I work for a hospital and we constantly struggle with the fact that we can't assume we have a full medical record for any given patient. As an easy example, we have no idea if our patients actually fill their prescriptions properly once they leave, making it impossible for us to do any sort of intervention on medication adherence.

With that said, we actually do have access to claims data for a significant chunk of our patients now, and that percentage is growing fast. It's part of a big push towards shifting risk from payors to providers to incentivize cost reduction.


>As an easy example, we have no idea if our patients actually fill their prescriptions properly once they leave, making it impossible for us to do any sort of intervention on medication adherence.

This is probably the lowest-hanging fruit of all waste in Medicare. Consider that the average Medicare patient has 7 prescribing physicians and 10 prescription therapies, yet not a single physician has access to the claims data to see what another physician has prescribed, leading to duplicate therapies and adverse drug interactions at ridiculous rates. These are generally chronic care patients, too, meaning complications with diabetes, blood pressure, cholesterol, etc., resulting in costly hospitalization.

MTM was a step toward fixing these issues - I'm guessing that may be what you're referencing as the improvement in sharing claims data (maybe you use OutcomesMTM) - but realistically MTM is just a program for insurers to monetize their claims data.


"potential founders of healthcare startups" are likely to be in the set "reads Hacker News", but would be a tiny subset.

Even if they all know the difference between claims data and healthcare records, that wouldn't show up in the commentariat at large.

Your fear is unfounded.


It's super strange and ripe for exploitation. The US needs some form of single-payer health care badly.


We do need universal healthcare, but why does it need to be single-payer specifically? Most countries with universal healthcare aren't single-payer, and what they have works just fine - not any worse than single-payer universal systems.


Are you sure the government is any more trustworthy? More importantly, are you sure that _everyone elected in the future_ will also be trustworthy? Remember, the "other side" can always gain power, so any system you conjure needs to work under the control of either party.


Yep, I am confident the government is more trustworthy because you and your employer have a conflict of interest here. Their goal is to keep you working or reduce costs, your goal is to get healthy. Also, as a voter, you control the government. As an employee, you do not control your employer. Just look to literally every other country doing this.


> Yep, I am confident the government is more trustworthy because you and your employer have a conflict of interest here.

It is such a relief to read this position on Hacker News, for once. I am continuously astonished by tech workers' refusal to recognize these conflicts between our interests as individuals, and our employers' interests in profitability. Work only gets done when these interests find an equilibrium.

I am especially concerned with how little my employer considers my long-term health. With tech job turnover rates universally understood to be a handful of years, I know my employer is unmotivated to maintain my long-term health, preferring to accrue "tech debt" in me, a jettisonable unit.


They say you can vote with your pocketbook, but that doesn't apply to monopolies with hundred-billion-dollar war chests that can coast for a decade without making any profit whatsoever.

Our vote applies immediately, not to mention there are term limits, so I trust the government much more also.


Well, it's the government that keeps health data sort of confidential now. If the government stops caring about that, I doubt private companies would step up.

If you have to trust the government on this anyway, cutting private insurers out seems to be minimizing risk, no?


> Well, it's the government that keeps health data sort of confidential now.

The government plays almost no role in keeping most health data confidential, since most PHI is held by private entities.

And it's not like the government has a great track record with what little PHI it is responsible for - there have been plenty of breaches against CMS and Medicare affecting large numbers of patients.


Without the US government playing its current role, I am 100% confident that the US health data that is currently confidential would stop being confidential. Just as we see with companies adding spyware to software, selling out users' data is profitable.

Why would for-profit companies keep the data confidential when they can make more money selling it?


> Without the US government playing it's current role, I am 100% confident that the US health data that is currently confidential would stop being confidential.

The fact that the government has passed a law that requires private entities to keep PHI confidential says nothing about their competence in managing PHI of their own. The original claim is about the latter.


If my employer has the data - the government can subpoena it if it really wants to. It's effectively accessible to both entities. If only the government has it, at least my employer doesn't have it.

Edit: More damningly, if my employer has it multiple governments potentially have access to it. Companies generally aren't willing to stand up to requests from, for example, China.


> If my employer has the data - the government can subpoena it if it really wants to. It's effectively accessible to both entities.

"The government" isn't a single entity, and most of "the government" can't access your data via a subpoena any more easily than a private company can.


It’s a lesser of two evils thing. Personally I find it creepier for a private employer to have that data than a (non-authoritarian) gov’t.


The government has hopefully learned from the OPM breach [1]. Also, the USDS has a good track record and has already worked on multiple healthcare-related projects [2]. They recently also revamped the VA claims system [3].

[1] https://en.wikipedia.org/wiki/Office_of_Personnel_Management... [2] https://www.usds.gov/projects [3] https://twitter.com/USDS/status/1192520880733208576


The VA seems to work approximately as well (or as badly) regardless of who's in charge.


Oddly enough, the VA works far better today than it did 3 years ago. YMMV.


In your opinion, what changes made that happen? When did the changes start, and how did they affect care, scheduling, etc.?


The VA Mission Act Trump signed into law.

Basically, it gives veterans access to more private healthcare options. If they don't get timely access, they are covered under any network.

> "We are offering them choice in the medical marketplace, and we now have, thanks to the president and thanks to choice, the highest veterans satisfaction rate in our history. We're sitting at about 89.7 percent"

- Robert Wilkie, U.S. Veterans Affairs Secretary


Precisely these two items.


Based on what?


It used to be 18+ months to get a VA claim of any kind completed. The norm now is less than 90 days. I've seen some faster than that. They are a whole different organization now than they were.


This is a misleading statement. Corps that offer insurance as a benefit can get reports that tell them how the benefit is being used by employees in the aggregate. However, corps cannot see each employee interaction, as this would be a huge HIPAA violation.


This is not what the parent is talking about. "Corps that self insure" are corporations so large that it's more effective for them to use a large bank account as the insurance plan for their employees than to buy a corporate policy from an actual health insurance company.

Huge companies like SAP or Microsoft or Oracle will do this, even though they may also hire a traditional branded health insurer to "manage" this large-bank-account health plan (i.e. give out cards, negotiate with hospitals, etc.). But the bank account (brokerage account, really) remains under the control of the corporation, and traditional plan premiums are not paid to the health insurance company named on the card. Fees for managing the plan are paid separately. Doctors and hospitals get paid from the corporation's account. This is what the parent means when they say that the corporation has visibility into employees' healthcare usage.


Just so it's clear to everyone, companies much smaller than SAP or Microsoft self insure. In the United States virtually every employer with more than 500 employees is going to self insure.

And while these companies do have access to detailed claim data in theory, in practice access to this data is typically heavily limited to a small number of HR or finance employees who are responsible for healthcare accounting and financial management, and the data presented is typically restricted to a large-claim report within a monthly/quarterly reporting period.


They can access individual claims because they themselves are the insurer. Claims have medical codes that map to treatments, diseases, prescriptions, doctor visits, etc. That’s what I mean by visibility.


Unlikely to be a HIPAA violation; it's pretty standard to get a BAA in place between entities sharing PHI-type data. It's spelled out on those HIPAA forms you sign at the provider's before they'll see you.

"The Privacy Rule allows covered providers and health plans to disclose protected health information to these “business associates” if the providers or plans obtain satisfactory assurances that the business associate will use the information only for the purposes for which it was engaged by the covered entity, will safeguard the information from misuse, and will help the covered entity comply with some of the covered entity’s duties under the Privacy Rule."

https://www.hhs.gov/hipaa/for-professionals/privacy/guidance...


Your quote misses the most important part:

>>"Covered entities may disclose protected health information to an entity in its role as a business associate only to help the covered entity carry out its health care functions – not for the business associate’s independent use or purposes, except as needed for the proper management and administration of the business associate."

BAAs keep the chain of HIPAA in place, but a BAA being in place does not allow for violation of privacy.


Agreed. The key point I was attempting to make is that a BAA allows orgs to share data legally. AFAIK Google hasn't violated any privacy with this project, and indeed they would be liable if they did.


Employees must authorize health care providers before the providers are able to disclose any healthcare-related information to their employer.

What you're saying simply is not true.


I mean claims data, which is arguably a good record of your health care interactions while employed at that company. They definitely have access to it.


They do not have access to claims data. They know your plan and your dependents, but that's it.


Yes they do if they are self insured, meaning they are literally the insurance company taking on the risk. This is the norm for large companies.


Where are the goalposts? And no, they wouldn't have individual data even if they're self-insured. They don't manage the claims themselves; that's ridiculous.

Do you have any source for your suggestions, or is this just a random hypothetical?


> Anyone else find it strange in the US with the self-insured health care plans (the norm with large companies) your company has complete visibility into every health care interaction you have while employed through claims data?

Self-insured plans mean that the companies underwrite their own plans, but they almost never manage their own claims. That would be cost-prohibitive and absurdly disadvantageous. The employer never touches raw claims data at all.

Companies have access to aggregate claims data, but they absolutely do not have "complete visibility into every health care interaction you have while employed".


Most companies use third party administrators to do the day to day management, but that doesn’t mean people in HR or Finance can’t access data on individual claims if there was a reason to. If an employee used $1M in healthcare believe me you don’t have to tell your company - they will know.


by "health care interaction" you would mean every billed / paid charged interaction, even so not usually identified down to a single member.

However, the employer would not have your medical records without consent of the employee, this works for self funded or fully insured.


Claims are a pretty solid proxy for health care interactions. That’s what I mean.


Self insurance is not the norm in the US, and in general employers have no visibility at all into your medical records. You've been misinformed.


Self insured is definitely the norm for large companies in the US.


Which ones? I'm sure somebody does it, but it doesn't even exist in my industry.


Surely they could have come up with a less alarmist headline for what seems like a much needed initiative in healthcare innovation.

But that's the case for all tech coverage nowadays.


No, the health care situation in the US is just perfect. Don't you try to do anything there, big bad tech companies. All they are going to spend their budget on is to personally identify people with sensitive health issues (e.g. erectile dysfunction) and serve them more ads. Sad! /s


You're blind if you think this is for "healthcare innovation". It's for bolstering what they know about users for advertising, and to combine with their recent Fitbit purchase. How could you possibly still view Google as an innocent company with everyone's best interests at heart?


I would examine your biases to see if they are not clouding your judgment and ability to see the other side.

Without restraint, these efforts will undeniably bolster their ability to conduct targeted advertisements, which is bad. Hopefully regulation and consumer backlash will take care of that.

But you cannot deny the amount of good we stand to gain from an advanced electronic health system. The insights that can be generated from cross correlation of all patient data, the amount of preventative measures that can be taken from detecting problems before they become bigger problems... The benefits far outweigh the negatives. It's unthinkable to have all this data and NOT attempt to find insights in them.


> But you cannot deny the amount of good we stand to gain from an advanced electronic health system.

Not one that has anything to do with Google though.


There is no circumstance where you can reasonably state the positives outweigh the negatives. You are conflating a vague improved digital health system with Google blatantly gaining access to millions of people's health data without consent. They have absolutely nothing to do with each other. I'm all for people opting in, but to have no say over random Google employees accessing my health data? There's no way that's a positive. Perhaps check your own biases? Thanks.


They employ a lot of talented people and they are among the few organizations that have the resources and will to actually solve these intricate problems, so more power to them as far as I'm concerned.


Tautologically, you've described every organization in the health tech space. Having a surveillance company secretly start transacting patient health information for their own profit isn't a solution in search of any problem I want solved.


What problem? Please tell me, what is this "solving" besides providing user data without consent to Google?


Does a good system have the potential to be good for patients? Sure. Will Google do that? Probably not. There's no incentive for them to improve health care for the patients; there's no profit. There is, however, profit in ads, and in increasing insurance margins for insurers and those funding insurance.

I'd like to think that Google is concerned about more than just profit, but they prove again (censored search for China) and again (YouTube policy changes) and again that they are not. Profit rules Google's decisions.

And that is why there's such backlash against Google getting this data. Because the chances of anything good for us as patients is slim to non-existent.


> There's no incentive for them to improve health care for the patients

The healthcare systems they are collaborating with (e.g. Mayo) are strongly motivated to improve the health of their patients, particularly in capitated models like accountable care organizations. Mayo is actually pretty famous for adopting the ACO model. Note that this article (1) from Mayo Clinic Proceedings is by David Shulkin, who went on to become the Secretary of Veterans Affairs.

You may also be interested to know Ascension is another ACO (2).

So why would an ACO be motivated to work with Google? Because they know reducing diagnostic variance is almost certainly identical to improving quality of diagnosis, which will reduce poor outcomes and reduce malpractice, cost of overtreatment, cost of undertreatment, and so on.

(1) https://www.mayoclinicproceedings.org/article/S0025-6196(12)...

(2) https://www.beckershospitalreview.com/acos-to-know-2019.html


I hadn't heard of ACOs before. Those seem to be some pretty awesome organizations; it does a lot to temper my frustration at the handover of this data.

Do we have any assurances that Google's efforts will be limited to improving care? Or are they being compensated by being able to use that data in other opportunities?


I'm pretty confident they are going to be laser-focused on improving care and will actively shed any work that doesn't advance that goal. You can see it not only in their business actions, but also in their leadership choices.

In terms of business actions, let's look at the NHS brouhaha. Hard to blame Google for the NHS screwing up the research protocol that led to the specific 1.6M patient records being transferred to DeepMind. And they corrected the research process years ago. They passed muster with Mayo (a deal that no doubt had to pass muster with Shulkin among other world-renowned physicians and administrators). They have deals with McKesson, Cleveland Clinic, and now Ascension. These are major players.

Their leadership choices give you additional insight on their motives. Their new Chief Health Officer is Karen DeSalvo, former National Coordinator for Health IT and Acting Assistant Secretary of HHS (and no doubt candidate for next Secretary of HHS). David Feinberg, their new VP for Health, is coming from serving as CEO of Geisinger and UCLA prior to that. These people are reputationally allergic to mixing medicine with adtech.


"such backlash"? you just heard about it.

And there is nothing wrong with monetary profit as motivation; the pursuit of profits has elevated the living standards of billions around the world, saving countless lives.


The pure pursuit of profit has also created sweatshops, blood diamonds, monopolies (and increased consumer costs), and union busting.

Profit itself is a dangerous motivation. Profit tempered by morals is what will improve lives. Google is showing that their drive for profits is not being tempered by morals.


The headline is not alarming enough. This is being done without proper consent from patients and the healthcare "innovation" is likely to be inaccessible to many of those same patients due to how unaffordable health care (including preventative care) is in the US.

Edit: It basically doesn't matter what procedure Google's algorithm says you should get. If your insurance doesn't want to cover it, you're not getting it.


Those forms you sign when you go to the doctor include consent to transmit your data to third parties as allowed under HIPAA regulations.


Evidently "consent" isn't lawfully required. The hope is that any breakthrough might lower healthcare costs, and in the worst case scenario at least some patients might benefit which is still a win.


> Evidently "consent" isn't lawfully required.

Ethics != law. Just because it's lawful doesn't mean it's ethical or that we shouldn't be outraged.

Edit: > The hope is that any breakthrough might lower healthcare costs

Perhaps then Google should instead lobby hard for single-payer. Most countries with single-payer systems have lower health care costs than the US. It's a proven solution that will lower costs, not a "hope".


Is it your claim that only a trivial portion of the $billions in healthcare spending is being spent on healthcare for the average insured person?


> Staffers across Alphabet Inc., Google’s parent, have access to the patient information, documents show...

And now, with Fitbit going Google, it appears that Google wants to know everything about you beyond your name and location - potentially serving you ads over your Fitbit, or prescription recommendations driven by those ads.

If this is what Google calls the "future of healthcare", then they need more luck than ever, as the healthcare industry is extremely regulated when it comes to healthcare records.


I personally suspect the Fitbit acquisition has more to do with making a reliable wearable personal assistant than "ads over Fitbit." Larry and Sergey are still into the idea that we're on the cusp of the "everybody has a wearable personal-area-network" part of dystopian cyberpunk futures arriving real soon now, and they want that wearable PAN to be able to track biometrics (for all kinds of reasons; it's just one more signal a personal assistant can tie into).


> Larry and Sergey are still into the idea that we're on the cusp of the "everybody has a wearable personal-area-network"

Over my dead body.


Especially if you're into extreme hiking and you end up unconscious in a ditch in the wilderness with no biometric monitor connected to a GPS locator and cellular radio. ;)


When you're into extreme stuff you should take extreme precautions. The chances of me ending up unconscious in a ditch in the wilderness are nil. This wasn't always the case, and I'd make sure to plot my route and have check-in points set up in advance, which is the way to do that. Your fall into the ditch may destroy your fancy gear, and then what? But that stuff certainly won't hurt to bring along on a risky trek (and on plenty that don't look all that risky - the weather can really get you).


Unless your family members start seeing personalized ads for funeral services and flowers quickly enough, they will never suspect you are in urgent need of medical attention.


I don't care if they know. I'm more thinking "The paramedic service that I pay a subscription fee to for LifeAlert-grade monitoring in the event of disturbing vitals."


It's ridiculous to think that Google would use this to serve ads.


Not ridiculous at all. Ads for (for example) heart pills are on TV now, placed at great cost, but currently blaring at an audience of mostly uninterested targets. Imagine being able to cheaply target the subset of those people who are not just part of a certain demographic, but who actually (for example) saw a cardiologist or went to the ER with a heart-attack scare in the last 12 months.


It's not ridiculous because they wouldn't want to do it, it's ridiculous because it would be illegal.


Yeah, right. Same thing with Facebook asking for phone numbers for two-factor. They would NEVER use that for ads, right?


One of the huge differences between Facebook and Google is that Google took care to never use that 2nd-factor for anything except account recovery.


Every interaction with my health provider's website has links to DoubleClick and Google Analytics.

This includes test results, doctor communication, and even the feedback page where I composed and sent my complaint.


There are all kinds of unhappy situations this could lead to, depending on just how far off the deep end Google/Alphabet goes.

As an example, let's say they identify 50 specific people who are going to die next week due to (say) a heart attack.

They could decide to only notify those people who are friendly to the Google/Alphabet world view, with the others purposely not being notified.

The point being, stuff like this can be misused, and Google/Alphabet is well down the path of doing dodgy stuff already. :(


They use your location data to serve ads. Why wouldn't they use your health care data? Is this a satire comment?


> Neither patients nor doctors have been notified

I guess this isn't a HIPAA violation?


Further down:

... privacy experts said it appeared to be permissible under federal law. That law, the Health Insurance Portability and Accountability Act of 1996, generally allows hospitals to share data with business partners without telling patients, as long as the information is used “only to help the covered entity carry out its health-care functions.”


Thank you and Wow!

That is a very broad statement within HIPAA and subject to interpretation, which leaves a huge door open for marketers to "help covered entities carry out their health-care functions".


HIPAA has lots of loopholes. For example: "Covered entities must implement reasonable safeguards to limit incidental, and avoid prohibited, uses and disclosures." There is no requirement to have soundproof walls. If your neighbor in the shared room overhears a conversation with a doctor, too bad.


My doctor's lobby has a sign that says "please wait in line here, to respect the privacy of other patients", about 4 feet from the desk. The desk itself is probably wider than 4 feet.


The point of HIPAA wasn’t really consumer privacy protection. It’s mostly just a side effect.


Marketing is specifically called out in the law as not being a healthcare function. It requires a separate authorization of consent from each patient before any of their data can be used for marketing purposes.


s/marketers/machine learning researchers and healthcare software designers/


Is the "carry out its health-care functions" argument a stretch?

Something is seriously broken with a system that lets them give away your health data without your consent, and the lack of anonymization only compounds the injury.


There is a loophole in HIPAA.

Basically, a hospital can give data to a partner if it has to in order to deliver care. But it was meant for, just as a for instance, your doctor giving a few DICOM studies to GE because they are going to consult on modifying the settings on a CT scanner to do some whiz-bang thing everyone thinks will help the patient (maybe even other patients down the line?).

Giving it to Google so they can serve you ads, er, um, I mean, "recommend different treatments" to you, kind of stretches the letter of the law. And definitely breaks the spirit of the law in my opinion.


It's not even really a loophole.

The letters in HIPAA stand for "portability" and "accountability." Privacy doesn't show up anywhere in the acronym. While privacy is important in healthcare and is codified in other laws, HIPAA's whole deal is to try and standardize a system by which those other laws can be followed, not to "guarantee consumer privacy" as an end of HIPAA law itself.


It's not really a loophole; it's kind of the whole point of the law. Its purpose was to establish standard, regulated methods for sharing health data between organizations.

And using it to serve you ads would be a clear violation of the law.


> Giving it to Google so they can serve you ads

Nobody is serving ads with this data. Sheesh.


> your doctor giving a few DICOM studies to GE because they are going to consult on modifying the settings on a CT scanner

Is that actually allowed without patient consent?


No. Patient consent is part of the whole "everyone thinks [it] will help" part.

Not really sure how Google finagled their access? It's odd when neither the doctor, nor the patient knows. I wonder if even the hospital administrators knew? Or is this a decision that was made at the system level and the average hospital CEO at Ascension was none the wiser?


I hate how patient consent is typically implemented. I've seen firsthand, in two ERs, how someone comes in with a computer, asks some questions about insurance and all, then says "please sign the HIPAA privacy form, and again to release the information to your insurance provider".

For all you know, you're signing away for an assisted suicide and authorizing your organs to be transplanted. I refuse to sign on the pad; I'm the patient who asks for it to be printed out on paper, signs it, and gets a copy too.


That's a bit different from the consent referred to above - this is the hospital covering themselves.

If you are consenting data for a trial or otherwise sharing with a company for development, someone (often paid for separately) will typically come and walk you through the consent form.


> It's odd when neither the doctor, nor the patient knows.

This is unusual.


It's wall-to-wall horrifying news about Google here these days


It's almost like your news sources are biased against one of their major competitors.


Or maybe because this supermassive company with its playful logo is up to no good.


It's more like google is a dystopian spyware company trying to suck in as much data as possible. Not that the news sources are great, but google is genuinely far worse.


This is alarmist bullshit. “Dystopian?” Really?


HN is also an echo chamber though.



We really need a second Bill of Rights.


Easy to make that argument when you're not providing any detail and so are giving others no way to respond. What would you put in it, and why?


I haven't studied this, but they do claim they're handling it right:

"All work related to Ascension’s engagement with Google is HIPAA compliant and underpinned by a robust data security and protection effort and adherence to Ascension’s strict requirements for data handling."

https://www.businesswire.com/news/home/20191111005613/en/Asc...

Maybe HIPAA isn't strong enough?


I've seen past instances of articles like this getting flagged here - possibly by a large number of Googlers/HR? I'm glad this hasn't occurred (yet).


This is a compilation and revision of my comments nested deeper in the thread. Let's start with the absurd argument that was posed:

> There's no incentive for them [Google] to improve health care for the patients

Ignoring the fact that Google is staffed by humans, and humans have a deep visceral response to engaging in healthcare, the healthcare systems they are collaborating with (e.g. Mayo) are strongly motivated to improve the health of their patients, particularly in capitated models like accountable care organizations. Mayo is actually pretty famous for adopting the ACO model. Note that the author of this article (1) from Mayo Clinic Proceedings is David Shulkin, who went on to become the Secretary of Veterans Affairs.

You may also be interested to know Ascension is another ACO (2).

So why would an ACO be motivated to work with Google? Because they know reducing diagnostic variance is almost certainly identical to improving quality of diagnosis, which will reduce poor outcomes and reduce malpractice, cost of overtreatment, cost of undertreatment, and so on.

Further, let me present evidence that I believe indicates Google Health is going to be laser-focused on improving care and will actively shed any work that doesn't advance that goal. Not only their business actions, but also their leadership choices.

In terms of business actions, let's look at the NHS brouhaha. It's hard to blame Google for the NHS screwing up the research protocol that led to the specific 1.6M patient records being transferred to Deep Mind, and they corrected that research process years ago. Do you really think Google wants to be seen anywhere near the mishandling of private information? That presents an existential risk to their business. They passed muster with Mayo (a deal that no doubt had to pass muster with Shulkin among other world-renowned physicians and administrators). They have deals with McKesson, Cleveland Clinic, and now Ascension. These are major players.

Their leadership choices give you additional insight on their motives. Their new Chief Health Officer is Karen DeSalvo, former National Coordinator for Health IT and Acting Assistant Secretary of HHS (and no doubt candidate for next Secretary of HHS). David Feinberg, their new VP for Health, is coming from serving as CEO of Geisinger and UCLA prior to that. These people are reputationally allergic to mixing medicine with adtech.

Finally, keep in mind that healthcare is widely regarded as one of the weakest points in Western cyber security. Bringing in grown-ups sounds like a phenomenal move to me.

Would it be nice if the company's core business wasn't adtech? I suppose. But for all the reasons above, I genuinely believe Google getting into this space is a better net outcome than the status quo.

(1) https://www.mayoclinicproceedings.org/article/S0025-6196(12)...

(2) https://www.beckershospitalreview.com/acos-to-know-2019.html


> They passed muster with Mayo (a deal that no doubt had to pass muster with Shulkin among other world-renowned physicians and administrators)

I used to work at both Mayo and Google.

Mayo isn't shy about the need to protect their name and reputation. They consider it one of their most valuable assets, if not the most valuable. After all, they built it over a century of hard work. Sometimes it borders on excessive. If you went to a conference, unless you were presenting, it was suggested for you to just say you're "from a large medical institution in the Midwest". If you sold them software or services, you were allowed to use their name (trademark) in the list of customers on your website, but only if the list had at least 4-5 names and it was in alphabetical order.

The day Mayo signed, also in light of their past collaboration with IBM (I was around in the BlueGene days...), I knew that Google must have committed large amounts of money and resources to the partnership. For those who are not aware, Dr Plummer at Mayo pretty much came up with the modern idea of a medical record a century ago and even had a sort of human-powered Google for records, keeping them in the basement, calling them up over intercom and delivering them over tubes.


Nightingale? Cuckoo, more like it.


Data being used for services is inevitable, but at this point we really need two things:

1. Laws around how shared data can be used that are more comprehensive.

2. A secure database, managed either by private companies or the government (with strong oversight), that allows companies who want to use the data to access it but not store it.

The key is that the companies using the data and the companies storing the data cannot be controlled or owned by the same entity. The companies who store the data also cannot be allowed to make money based on the type of data; they would need to be limited to monetizing only the cost of distributing the data. There will also be a need to police that companies do not store the data they have access to.
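
A minimal sketch of that separation of duties (all names here are hypothetical; this is an illustration, not any real system): the custodian answers narrow, logged queries and deliberately exposes no bulk-export path, so a consumer can compute against the data without ever holding it.

    import time

    class Custodian:
        """Hypothetical data custodian: holds records, answers narrow queries,
        logs every access, and offers no bulk-export method at all."""

        def __init__(self, records):
            self._records = records   # only the custodian stores the raw data
            self.audit_log = []       # an oversight body can inspect this

        def count_where(self, query_name, predicate):
            # Consumers get aggregates, never the underlying rows.
            self.audit_log.append((time.time(), query_name))
            return sum(1 for r in self._records if predicate(r))

    custodian = Custodian([{"age": 70, "dx": "hypertension"},
                           {"age": 34, "dx": "asthma"}])
    # A data-using company can ask questions...
    print(custodian.count_where("seniors", lambda r: r["age"] >= 65))  # -> 1
    # ...but there is intentionally no call that returns the records themselves.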


How can I opt out of this???


Suddenly all the investments into genetics, aging, and biotech from Calico / Verily makes sense.

I imagine Alphabet’s end goal is some kind of intrinsic understanding of every single person on a biological level in order to better target and serve them ads that fit them.


It always tickles my funny bone to see people attempt to tie every Google move to "How do we use this for ads?"

It's the opposite; Google's seeing that their revenue stream is very near a monoculture and they know monocultures can't last forever. A lot of their initiatives (cloud, video, wearables) are trying hard to find an alternate revenue stream that will be as profitable as ads so that when the industry inevitably crashes (not because of anything specific they can predict, but because all industries eventually do) they're not powering their entire empire off of a single river that just dried up.


Sure. Their biggest business outside of search is YouTube, but that doesn't make much money on subscriptions; it's ads...


Yep. The goal they're targeting is a huge uphill battle for them. Margins on ads are so good, there's almost nothing that competes.


With Premium and TV, they are making a big push to move the YouTube revenue model away from ad dependence.


Don't forget gaming. They're going after the Twitch business model there. Combining that with traditional YouTube and Stadia offers a very robust social gaming platform. They're basically going after digital entertainment across the board, looking for opportunities to leverage cloud/AI to leapfrog incumbents.


I wish I could say it amused me every time I come across this type of justification for the ever-quickening erosion of personal privacy in the era of big tech. Here's why my gut reaction to interpreting Google's behavior starts from the core assumption that they are pursuing ever-greater levels of privacy-eroding ad tech in nearly everything they do:

0) Google's only purpose is to make money.

1) Google's strongest strategy (by orders of magnitude) for making money is targeted advertising.

2) Google has surpassed the critical size threshold beyond which morality plays no role in business decisions (see surveillance state-supporting tech in China among other issues).

Due to these facts it appears inevitable that Google will seek in all of its large-scale actions to involve ad tech in whatever it produces or engages in. It's how they fulfill their purpose (selling the most sophisticated targeted advertising possible).


That's what I'm saying. Targeted advertising isn't guaranteed to satisfy rule 0. The industry is under threat from multiple sources, and more importantly, no single industry ever goes forever without encountering a storm.

A naive approach would be to hyper-optimize on ad targeting to pursue rule 0, but Google does its best to not be naive. The company's ownership model is designed to target the long game, which is what they seem to tend to do in their decisionmaking much of the time.

That isn't to say that advertising doesn't factor in, or that they definitely won't ever use biometrics for ad data; merely that "we could use this for ads" probably doesn't by itself justify buying Fitbit. But potential uses in advertising and Assistant together might.


How can advertisements ever die in a capitalist society? Anyone who wants to sell anything buys ads. That's how the cycle works.

Sure, they can morph into different mediums and appear in different contexts, but that doesn't make them "not ads."


Agreed; advertising can be regulated but doesn't go away.

Online ads specifically have enjoyed premium value over even "traditional" print and television ads because of highly-specific microtargeting; there's a combination of demographic data and ad targeting that an online service can use to offer expectations that you're paying specifically for the high-value slice of people you want to reach. It's an expectation that print and TV (being wide-cast, non-customized experiences) have difficulty offering.

Various phenomena (market regulation, the Facebook / Twitter data walled gardens, the proliferation of ad- and script-blockers) threaten the guarantees that online advertising has been able to make, and it's unclear that Google's high-cost-of-operation empire survives unscathed if money flows away from premium online ads (either by advertisers going to alternate channels or by advertisers saying "Your value-add claims are bunk and I'm not going to pay for them").


How about the fact that health care in the US is ripe for change? We spend a lot of money only to have the insurance company dictate the course of action, rather than the doctors on the front line.


Why do you think Google's intervention will result in doctor autonomy and not rule by algorithm?


Doctors that I know resent their lack of autonomy. They are executing algorithms, generally against their will. They know the patient will get over their cold, but the patient wants a pill. They over-test to avoid litigation. They treat symptoms, not causes.

Not because they think that's the right course of action, but rather because they'll get fired if they don't.


Isn't "rule by algorithm" kind of what we have now anyway?


Specifically pharmaceutical products. "This pill you don't know about will stave off a genetic condition you don't know you have."

Cognitive dissonance overwhelms me.


That actually sounds pretty useful if they can get away from the problem of mistaking correlation for causation, but it's probably not a problem you can solve with big data alone, for exactly that reason.


What are you trying to say? Won't that at least be a beneficial outcome of having all this data mined?


Aaand your deductible has now tripled; enjoy being broke and temporarily healthy. :) (Please don't tell me that googlenet would never sell/barter/exchange this data or analyzed results based on it.)


> please don't tell me that googlenet would never sell/barter/exchange this data or analyzed results based on it

Why would they be any less trustworthy than the hospitals or insurance companies who normally have this data?


First and main point: currently either hospitals+insurance have this data, or hospitals+insurance+Google do. Even without paranoia or conspiracies, it is logical that N entities are more trustworthy than N+1. And another thing: hospitals are much more regulated by government (insurance, I don't know), and due to their byzantine structure and ancient IT systems it is harder to do mass extraction of data. One patient's data is easy; a small bribe to one or two humans and it's yours. Data on a million patients is much harder. Google, on the other hand, has means AND motive to do anything with any amount of personal data. They could set up high-frequency trading with it if they wanted. Nothing to stop them: zero regulation, offshore management, offshore funding.


Perhaps you would benefit from reading up on what a Business Associate under HIPAA is.


Awesome! We need grand innovation in the health records space if we truly want to help people be healthier. I hope Google does something great with this plus Fitbit. If anyone is interested in our research in this space, we are aiming to radically improve medical records and healthcare delivery/research as well. We don’t have the resources of Google but everything we’re doing is open and public domain: https://pau.treenotation.org


I completely agree, I work in healthcare data analytics and this is the first step.

My hope would be that the government can step in and make your EMR (electronic medical record) something akin to your SSN. It is your data and shouldn't belong to the provider.

It would require an institutional body/government to standardize the EMR and centralize it, and allow the individual to share their EMR key with whom they wish (Google, your company, etc.) to help provide analytics tailoring health care to the individual.

Wishful thinking, but I hope Google is able to achieve significant outcomes with the data to show it is possible.


Awesome! We are on the same page on a lot of things.

> It is your data and shouldn't belong to the provider.

Agreed! What do you think of the concept of the "Copy/Paste Test"? The idea is that a good EHR should allow you to copy/paste your entire medical history into an email in a non-fragile way. If it can't do that, it's not a good EHR. We think this one dimension encompasses a lot of sub-dimensions of what goes into a well-designed EHR system.

> It would require an institutional body/government to standardize the EMR and centralize it

We're thinking the opposite: decentralized, git-backed, concatenative grammars. You would probably eventually have one grammar rise to the top, but the idea is to allow anyone to view, suggest edits to, and fork the collection of grammars. Here is the current collection of Pau Grammar files: https://github.com/treenotation/pau/tree/master/grams. Note: this isn't even v1 yet, but the core ideas are there.
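
To make the "grammar" idea concrete for readers who haven't looked at the repo: the records are indentation-structured plain text rather than brace-delimited. Here's a toy Python parser for that shape of data (my own simplification for illustration, not the real Tree Notation library):

    def parse(text, indent="  "):
        """Toy parser for an indentation-based record: the first word on a
        line is the key, the rest is the value, deeper indentation nests."""
        root = {"children": []}
        stack = [(-1, root)]
        for line in text.splitlines():
            if not line.strip():
                continue
            depth = (len(line) - len(line.lstrip(" "))) // len(indent)
            key, _, value = line.strip().partition(" ")
            node = {"key": key, "value": value, "children": []}
            while stack[-1][0] >= depth:
                stack.pop()               # climb back up to this node's parent
            stack[-1][1]["children"].append(node)
            stack.append((depth, node))
        return root["children"]

    record = "patient\n  name Jane Doe\n  allergy penicillin\n    severity high"
    print(parse(record)[0]["children"][1]["children"])  # -> the severity node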


> The idea is a good EHR should allow you to copy/paste your entire medical history into an email in a non-fragile way.

On the surface, this seems to be a poor test. Are you using it as a proxy for a patient to be able to get their EHR records out or something else?


> Are you using it as a proxy for a patient to be able to get their EHR records out or something else?

It's a proxy to test a lot of dimensions at once: not only how accessible the records are to the patient (facilitating care, particularly in acute settings), but also how well designed the grammar and schemas are. Well designed grammars and schemas should survive copy/pasting easily. Any errors should be quickly and readily identified with the potential for autocorrections.

VA created a very cool thing called Blue Button (https://www.va.gov/bluebutton/). It is a step toward passing the copy/paste test: any veteran can download their complete medical history in a single file. The schema isn't there yet and parsing these things is a pain, but it's a step in the right direction.
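
To illustrate the "parsing is a pain" point: the download is one flat, human-oriented text file, so tooling ends up splitting it heuristically. A rough sketch of such a splitter (the all-caps-heading convention here is my invented stand-in, not the actual Blue Button layout):

    import re

    def split_sections(text):
        """Split a flat medical-history dump into {heading: body} pairs.
        Assumes headings are all-caps lines; an illustrative guess only."""
        sections, current, buf = {}, "PREAMBLE", []
        for line in text.splitlines():
            if re.fullmatch(r"[A-Z][A-Z /]+", line.strip()):
                sections[current] = "\n".join(buf).strip()
                current, buf = line.strip(), []
            else:
                buf.append(line)
        sections[current] = "\n".join(buf).strip()
        return sections

    dump = "DEMOGRAPHICS\nName: J. Doe\nMEDICATIONS\nLisinopril 10mg daily"
    print(split_sections(dump)["MEDICATIONS"])  # -> Lisinopril 10mg daily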


Ok, that makes more sense.

I guess initially I would think of this as a round-trip requirement. In theory at least, I should be able to download the entirety of my history (in an appropriate format), delete the record on the EHR, re-upload my history, and end up with the identical EHR record, mutatis mutandis.
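
In code, that round-trip requirement is just a property test. A minimal sketch, with a toy in-memory EHR standing in for a real one (export_record/delete_record/import_record are hypothetical names, not any vendor's API):

    class ToyEHR:
        """In-memory stand-in for a real EHR, just to make the test runnable."""
        def __init__(self):
            self._db = {}
        def export_record(self, pid):
            return self._db[pid]
        def delete_record(self, pid):
            del self._db[pid]
        def import_record(self, pid, blob):
            self._db[pid] = blob

    def round_trips(ehr, patient_id):
        """Export, wipe, re-import, and check the record survived intact."""
        original = ehr.export_record(patient_id)   # full history, serialized
        ehr.delete_record(patient_id)              # wipe it from the system
        ehr.import_record(patient_id, original)    # restore from the export alone
        return ehr.export_record(patient_id) == original

    ehr = ToyEHR()
    ehr.import_record("p1", "patient\n  name Jane Doe")
    assert round_trips(ehr, "p1")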

Sticking "email" in there had me thinking your were focused on transmission.


> I guess initially I would think of this as a round-trip requirement. In theory at least I should be able to download the entirety of my history (in an appropriate format); delete the record on the EHR; re-upload my history and end up with the identical EHR record mutatis mutandis.

I like this test! Yes, passing the copy/paste test should pass this test as well.


The core of this idea is good; the tricky part is both the technical and legal impediments to abuse. Specifically, as an individual I should be able to share medical history with you (individual/corporation/government/whatever) in a way that is completely granular (I decide what you see and don't), revocable, and non-transferable.


> I should be able to share medical history with you (individual/corporation/government/whatever) in a way that is completely granular (I decide what you see and don't), revocable, and non-transferable.

I agree 100% with the granular.

For "revocable, and non-transferable" I'm not sure how you would do that without making it not worth the trouble, but maybe I'm just not seeing something.

One thing I think solves a number of problems is what we call the "synthesize test": an EHR should be able to synthesize as many medical records as a researcher needs with one click. Then you could design software against lots of synthesized records, and move the compute over actual patient records to the edge, on their devices. Maybe you could do something like that for the "revocable/non-transferable" thing: patients submit their records to a machine with a ticking expiration that does the training, emits the trained model, and destroys the training data (assuming we can prevent leakage of sensitive info into the model).
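
A minimal sketch of that "synthesize test": one call that fabricates as many plausible, entirely fake records as a researcher wants (field names and value pools invented for illustration):

    import random

    def synthesize_records(n, seed=0):
        """Generate n fake patient records for development and testing."""
        rng = random.Random(seed)   # seeded so test fixtures are reproducible
        diagnoses = ["hypertension", "asthma", "diabetes", "healthy"]
        return [{"id": f"syn-{i}",
                 "age": rng.randint(0, 95),
                 "dx": rng.choice(diagnoses)}
                for i in range(n)]

    # Build software against synthetic data; real records never leave the patient.
    print(synthesize_records(3))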


re: revocation, non-transferable, etc.

What I am suggesting is that data/learning companies need to start thinking of this sort of data differently: what you are negotiating is access to the data under certain terms, not ownership of the data.

If the individuals retain ownership, the company should not be able to, say, sell that access on to a third party (cf. Cambridge Analytica), or use it in a different way than negotiated for. If the company does not live up to the terms negotiated, the individual should be able to revoke the permission. There are various ways to do this, one being something like you describe.
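
One way the "access under terms, not ownership" part could look in practice, as a hedged sketch with hypothetical names (this covers the granular and revocable properties; non-transferability would still need contractual teeth): the custodian issues scoped, expiring grants and checks them on every read, so revocation is just deleting the grant.

    import time
    import uuid

    class GrantRegistry:
        """Patient-controlled access grants: field-scoped, expiring, revocable."""

        def __init__(self):
            self._grants = {}

        def grant(self, grantee, fields, ttl_seconds):
            token = str(uuid.uuid4())
            self._grants[token] = (grantee, set(fields), time.time() + ttl_seconds)
            return token

        def revoke(self, token):
            self._grants.pop(token, None)   # revocation: the grant stops existing

        def read(self, token, record):
            _, fields, expires = self._grants.get(token, (None, set(), 0.0))
            if time.time() > expires:
                raise PermissionError("grant expired or revoked")
            return {k: v for k, v in record.items() if k in fields}  # granular

    reg = GrantRegistry()
    t = reg.grant("research-co", ["dx"], ttl_seconds=3600)
    print(reg.read(t, {"name": "Jane Doe", "dx": "asthma"}))  # -> {'dx': 'asthma'}
    reg.revoke(t)   # any later read with t raises PermissionError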


But in that case, it _really_ belongs to the government, not to you. "Your" SSN isn't yours; the government "owns" the number and uses it to uniquely identify you. I trust my medical records being on paper at my house, not in someone else's database (be it owned by a government or by a company).


Out of curiosity, how does that work when your doctor needs to treat you? And if you suffer cranial trauma and can't remember where you keep those medical records, do you just get worse care from that point forward because your healthcare provider lacks access to your history?


Trusted family members have access to my information, too. I'll admit it doesn't work perfectly if one lives on his own without family.


That doesn't really help in an emergency. You could be dead from a drug interaction before they even know you're in a hospital.


I mean, the same situation could happen even with medical records, unless you give every hospital instant access to everyone's medical info. I assume you're referring to the sort of situation where one is taken to the ER for emergency treatment; even were an electronic record system to be implemented, how would the hospital obtain records from an unconscious person? That's absolutely the sort of situation where doctors must try emergency treatment and have to risk an adverse reaction. I acknowledge there are flaws to the system, but I fail to see how an electronic system mitigates those flaws.


> how would the hospital obtain records from an unconscious person?

In most developed countries, they pull his ID out of his wallet and type it into the nearest computer. National healthcare has many advantages.


Claiming that your file format is anywhere near relevant to the quality of health data processing casts suspicion on your competence in both file formats and health records.

Please stop spamming HN to promote your project. We've all seen it and decided whether we are interested.


[flagged]


I actually enjoyed the parts of your comment that did not include the personal attacks. Overall they weaken your stance and make your arguments feel less genuine. I think you have some really good points otherwise. Consider updating this comment (and ones in the future) to use a more mature tone, and I would imagine you would garner more interest in both your product and your goals.


Yeah, I agree. Sometimes I'm too rushed to be nice! I've pitched HN a lot on perhaps creating a more restrictive grammar to help keep conversations constructive. I saw that Grammarly actually rolled out something new that shows a simple smiley/frown based upon the tone of the content in your textarea.

IRL I'm not an argumentative person, but I consider coding/posting on forums to be my equivalent of the gridiron, and I hope people don't take offense when I fight hard on the field. I'm just hoping we can all get to the correct answers the quickest, and so if people think I'm wrong, please provide data or detailed explanations as to why, or else be prepared to take some heat.


Reinventing YAML badly, with little or no apparent work done toward buy-in from the enormous ecosystem of medical providers whose existing EHR providers you would need to displace in order to make a market for your new thing, seems like a remarkably inefficient way to use time.


Please don't be a jerk on HN. Thoughtful critique is welcome, but shallow dismissals and putdowns like "Reinventing YAML badly" are not. Your comment contains some good information but the way it started and ended overwhelm it with badness. It would be much better if you had taken those bits out and added more good information instead.

https://news.ycombinator.com/newsguidelines.html


> Reinventing YAML badly

Let's go. Put your money where your mouth is. Here's my bet: http://longbets.org/793/. Let's see your argument for why YAML is better. YAML is full of unnecessary complexity, and we have immense research to back that up. If you want to argue, do some actual work.

> with little or no apparent work done toward buy-in from the enormous ecosystem of medical provider

Let's see: in the past 4 weeks we had meetings with 3 people from NIH and 2 from OHDSI, talked with 5 different healthtech startups, had meetings with researchers from 3 different top-tier institutions about collaborating for the R1 Feb cycle, had meetings with researchers from 4 different countries, and was on a panel on medical tourism speaking about the importance of portable medical records... you were saying? You might not want to make assumptions, because you make yourself look like an ass.


Please don't respond to a bad comment by breaking the site guidelines yourself, even if it was provocative.

https://news.ycombinator.com/newsguidelines.html


You're right, I apologize and need to stick to the guidelines better. Thanks.


A prediction as to how many tree languages will be in the TIOBE top-ten eight years from now doesn't change the technical merit of your project. You have failed to justify why you feel the need to create a new grammar rather than using JSON, YAML, or TOML (my preference). Your project necessitates re-implementing parsers, viewers, and all the other tooling that goes into a new grammar; this is quite an ask for a domain-specific language with no traction.

You say you have had meetings, yet that's not actually buy-in. People will take your idea more seriously when you have actual adoption.

Oh and with respect to your second possibility on the long bets link, you really think C, Java, Python, SQL, PHP, JavaScript, and C++ will all fall out of the top 10? Break that out into a separate bet and you'll have people lining up to bet against it.


> A prediction as to how many tree languages will be in the TIOBE top-ten eight years from now doesn't change the technical merit of your project.

Agreed. What it does say is that I have a lot of confidence in the technical merits. If someone spots a flaw in the technical merits, I'm all ears (i.e. prove why 2-D or 3-D languages are inferior to 1-D languages). If someone can show me a data structure that can be more efficiently (fewer parts) represented by a 1-D language, that would be a great argument against 2-D languages. But that hasn't happened yet, which makes me more confident in the technical merits.

> Your project necessitates re-implementing parsers, viewers, and all the other tooling that goes into a new grammar;

It's a lot of work and investment, I know. I'm funding a lot of it myself. But the potential rewards for the world are vast, and sometimes you have to do things that are hard. The bet is that because this is simpler, the effects will compound, and the evidence so far points that way. For example, the Grammar Tree Language now gives you parsers, highlighters, type checkers, visualizers, synthesizers, go-to-definition (new this week, thanks ZK!), etc., for a new Tree Language, in very few lines of code relative to existing 1-D languages. If you agree with the statement "software is eating the world", and I'm correct that 2-D/3-D languages are a better type of software, then I think this could create far north of $1 trillion worth of value annually for the world. More importantly to me, it can help revolutionize medical records, and then both medical research and healthcare delivery in the process.

> You say you have had meetings, yet that's not actually buy-in.

I agree! I wish I had more buy-in but it's still very early and we are working on it. The parent comment said "with little or no apparent work done toward buy-in" (emphasis added). We are indeed working toward buy-in.

> Oh and with respect to your second possibility on the long bets link, you really think C, Java, Python, SQL, PHP, JavaScript, and C++ will all fall out of the top 10?

Yes. If not from Tree Languages, then from someone else's novel non-1-D languages. This isn't shooting from the hip either, btw: we've built the world's largest database of programming languages and notations (over 10k languages, over 1k columns), so we can forecast and simulate the future.


How to opt out?


Move to another country?


Another planet more like


Just like hackers who violate people's privacy, Google is in the same business, only under the disguise of "terms of service" and other legal shields. It follows us daily from website to website, from location data to usage data, and on and on. Surveillance capitalism is contrary to our values.


Google's core business and overriding concern is voyeurism and stalking. They're creepy as hell. It should be framed in those, accurate, terms at every opportunity. It doesn't matter why they're doing it—those are the actions they're taking. They're creepy, voyeuristic stalkers, to the core. It's a vital part of their income stream, inseparable from and often motivating everything else they do.

I mean granted most of the other tech giants aren't far off from them. But that just means none of them should get a pass on that crap.


Remember to include every business that buys ads from Google as well.


"Our values" aren't universal. I benefit from it on the whole; it's okay by me.


You would feel differently if you held views, or had elements of your personal life exposed, that were contrary to mainstream thought. Complete data about a person is all that's needed to create the most pervasive fascist society the world has ever seen. And all it takes is us saying we value the convenience of having our data shared.

Sure Google has mostly benign reasons for wanting this data, but putting it in a big warehouse means it's a juicy target for anyone who wants to oppress you for any reason. There's a reason the secret police in the USSR had records about everyone. Because the only way to really effectively guarantee compliance is to ensure that you know exactly what the subject is doing at all hours of the day, even when they think they're in private.

The shitty thing about this is there's effectively no way to opt out. Just being alive in this time is enough for these companies to gather an unbelievably creepy amount of information about you.


Exposure of private data is actually counter to surveillance capitalism. The "capitalism" part depends on scarcity of aggregated data: the company knows something about a person that other companies don't, and that exclusivity has competitive value.

It's not "surveillance capitalism" to collect and then publish a pile of damaging data on a person; it's just sloppy, and makes people less inclined to do business with the company that screwed up (on average).

I trust Google specifically with my data because Google has some real good incentives to keep it private.

Fascism is fascism, and I find it interesting (in a cognitive-dissonance sense) how many of my European peers are leery of data collection by a Google but are extremely comfortable with their respective governments running universal healthcare systems. Aren't those governments one fascist dictator away from all that centralized government health data being used to drive a mass extermination? It sounds a lot like people's real disagreement is on choice of master of the data, not whether data aggregation has utility to them.


The PHI you're speaking of is something that doctors generally don't sell. That's pretty different from what Google wants to do with the data.

