This phenomenon goes way deeper than people not caring about privacy. It underlies everything from the bystander effect to the Nuremberg defence, and everything in between.
People just don’t want to organize, generally speaking. The people who do usually end up as part of the elite. The rest are just focused on their individual lives.
They used to, far more. The eighties and nineties dismantled many of the places and means of organising: the disassembly of civic community, the weakening of unions, restrictions on demos, the decline of social societies such as the Lions Clubs. Now we're all individuals, organising alone.
"For an entire century, the corporate class in America has dreamed of destroying organized labor, which eats into their ability to amass endless profits. They’ve chipped away at unions for 40 years. Now, it is no exaggeration to say that they are two steps away from total victory."
"This is a very scary time. Take it seriously."
This doesn't sound too bad, until you see what happens after decades of rot. The American education system, according to anyone who's ever tried to fix it, is legally unfixable so long as the union exists. That union can block any change from happening, and it blocks everything that would threaten any teacher. The reason we don't hear more about this is that rich people send their kids to private schools. I'm not saying the education system is simple, or that eliminating this union will magically create the best education system on Earth, but I do believe it bears more responsibility than everything else combined, including poor funding for public schools.
While it's obviously trying to convey a message, I'd still strongly recommend the documentary Waiting for Superman.
My message is just a link and the quote from there, and it has been just that since the beginning. So do you really think I've edited anything that changed the meaning of the message, or is that "edit" of yours unrelated to my message?
I realize this is HN and we are expected/encouraged to interpret people's meanings with the best possible intent, but rereading that edit I can understand why you would feel otherwise.
I'm sorry, but in this case I'm absolutely sure there was never anything but the quote there, and the confusion is due to some other comment.
> it could be entirely coincidental
What I meant is that it's possible someone edits a comment before they see there's a reply to it (this is pretty easy to do with HN's interface).
At first I was frustrated because I thought your comment had been edited after I replied to it, and then my second comment was an apology for erroneously believing it had been changed.
I apologize again for the confusion.
For years I've used "delay" in my HN user settings:
(Just pointing it out for those who missed that possibility.)
And the fiasco that started that conjecture was the `Kitty Genovese` murder, which was repeatedly reported as well. The police just didn't care enough to actually dispatch someone that day. Or are you insinuating that random people from the street should interfere when there is an armed perpetrator running around knifing people?
Or `Raymond Zack`... how do you expect random people to actually do something? He wanted to kill himself. If he had wanted to get out of the water, he could have done it within the 60 minutes he was just standing there. And after he collapsed, someone even went in, despite the fact that it's incredibly dangerous to try to help someone who is actively trying to kill himself. If that doesn't disprove the conjecture by itself, I don't know what could.
Each and every example just shows that there are tragedies the police aren't able to handle. That doesn't mean the bystander effect is actually real.
If all of us want to be completely different from everyone else, we no longer have much in common, and there is no reason to unionize or fight for the same goals.
Every little protest or group can then be shut down or silenced just by pointing out that it represents only a tiny minority of all the people.
I know this could spark some debate, but I think this is a point where Marx was right all along. It's the things we have in common that help us collaborate to tackle our problems, and we are strongest when working together.
I've gotten cynical as well; I feel like I'm constantly waiting on a bus full of people who are interested in their freedom, a bus that just never arrives.
As an aside, it is not just DeepMind/Google; all the players are trying to get all the EHR data around the world. The NHS is particularly attractive due to the volume of historical data, of course. It's no secret that IBM Watson is working with NHS Wales, although I don't know what data sharing agreement is in place.
That's fine so long as it goes nowhere else - it's perfectly reasonable for the NHS to have AI models to do their thing, just as the Met Office and others do. Remember years ago Google had a Search Appliance? A rack-mount server to put in your business. The NHS should be doing the same with IBM, DeepMind and other models - figure out a way to licence and use the technology without sharing any data back whatsoever.
Data for training - data that helps IBM et al. build better models - should be 120 years old or more, guaranteed not to contain anyone alive. The restrictions on medical privacy are there for very good reasons, though I tend to the view they don't go nearly far enough for the modern world.
The only way is to completely anonymize the data, which is way harder than it sounds.
What's complicated about it? So long as it's not attached to a name, social security number, or some other personally identifiable marker - how would anybody know it's you? You're just an anonymous collection of data points.
I understand whatever fingerprint they use for your data is almost definitely completely unique, but that doesn't mean it can be associated with the corresponding person unless you allow that to happen. I mean I get that it's obviously not trivial from a security perspective, but what's the main blockade here?
What's complicated is Google and others are extreme data hoarders. Google can get the anonymous health records and run a fuzzy match of model records against Home, URL and gmail records. Find high confidence correlations against multiple conditions across the years, where the search preceded new medical entries for those conditions within x time. Now they're in this data set for condition y and oh lookie, they searched condition y related topics multiple times last year. Anonymous record i1389541 is Alice Smith of Wimbledon, set her adwords flags to condition y snake oil, sell record to UK life insurance consortium. [repeat]. No personal data was released, but you're fully de-anonymised.
With enough data and history it becomes absurdly easy to decloak the majority with high or complete confidence.
Welcome to the future.
The alternative is to find a way to prevent those uniquely identifying correlations at any point in the future, using techniques we haven't yet thought of. Which is probably impossible without diverging the anonymous medical record some considerable distance from reality. Faked enough to remain anonymous might not be helpful for research into a condition or its diagnosis...
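The linkage attack described above can be sketched in a few lines. This is a minimal, entirely hypothetical illustration - the record IDs, names, conditions, and dates are invented, and real attacks use far fuzzier matching - but it shows the core move: correlate condition-related searches in an identified log with diagnoses in an "anonymised" record that appear shortly afterwards.

```python
# Hypothetical sketch of a linkage attack: anonymised medical records are
# matched to an identified search log purely by correlating condition-related
# searches that precede new diagnoses. All data below is invented.
from datetime import date, timedelta

# "Anonymised" medical records: no names, just an opaque ID and diagnoses.
medical_records = {
    "i1389541": [("diabetes", date(2019, 3, 10)), ("gout", date(2019, 9, 2))],
    "i2200417": [("asthma", date(2019, 5, 1))],
}

# Identified search logs held by the data hoarder.
search_logs = {
    "alice@example.com": [("diabetes symptoms", date(2019, 2, 20)),
                          ("gout treatment", date(2019, 8, 15))],
    "bob@example.com":   [("asthma inhaler", date(2019, 6, 9))],
}

def link(records, logs, window=timedelta(days=60)):
    """Match each anonymous record to the user whose condition-related
    searches most often precede that record's diagnoses within `window`."""
    matches = {}
    for rec_id, diagnoses in records.items():
        best_user, best_hits = None, 0
        for user, queries in logs.items():
            hits = sum(
                1
                for cond, diag_date in diagnoses
                for query, q_date in queries
                if cond in query and timedelta(0) <= diag_date - q_date <= window
            )
            if hits > best_hits:
                best_user, best_hits = user, hits
        if best_hits:
            matches[rec_id] = best_user
    return matches

print(link(medical_records, search_logs))
# Record i1389541 links on two independent conditions - fully de-anonymised,
# even though no "personal data" was ever attached to the medical record.
```

Note that the match on two independent condition/date correlations is what gives high confidence; one coincidental search would not.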
That said, that's completely different from what I implied in my original comment, so point taken.
If I'm exaggerating it's from trying to make up a credible example off the cuff. When the Snowden revelations were coming thick and fast I remember several examples coming up around search stream and meta-data. Mostly terrifying as either could reveal in far more detail than actual content of messages what you appeared to be thinking in the moment. Tie that with medical and health data and any pretence of privacy falls.
The key insight[ß] was that privacy researchers are essentially rediscovering the information-theoretical deanonymisation attacks that have been known to advanced marketing departments for the past two decades.
0: Evgenios Kornaropoulos, UC Berkeley: "Attacks on Encrypted Databases Beyond the Uniform Query Distribution"
ß: anyone present at the last talk should recognise the term used.
On the other hand, personally I wish my medical history could be scanned and screened for some preventable terminal diseases by some AI (preferably with some form of consent, though).
That being said, I would also respect someone who said that it is nice to see Google aid in medical research and see some of the best minds of ML allowed to access high quality data to develop these solutions.
This was not a Google ad network integrating medical records in order to target the sale of medicine... It was a medical research project.
I'm going to get fired up once someone gets hurt. I know someone is going to call slippery slope, but until someone can prove harm (beyond the broken trust and the principle of data sharing), I'm going to save my pitchfork for bigger problems.
How many females in a (county|country|postcode) born on 5 March 1942 are there with a kidney problem? Particularly in the UK, where a postcode - even the locality selector (the first half, e.g. M50, NW11) - is more precise than many US ZIP codes, which can cover many square miles. A full UK postcode narrows it down to within the street - a dozen or two houses, usually. What if there is associated medical data like medications or chronic conditions - diabetes, arthritis, etc.?
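That question is the classic quasi-identifier counting exercise (Latanya Sweeney famously estimated that roughly 87% of the US population is unique on just gender, date of birth, and 5-digit ZIP). A toy sketch with made-up records shows how it works: group records by the quasi-identifiers alone and look at the size of each group.

```python
# How many records share the same quasi-identifiers (gender, date of birth,
# postcode district)? Any group of size 1 is unique - those fields alone
# pinpoint one person, and the attached condition is effectively exposed.
# The records below are entirely made up for illustration.
from collections import Counter

records = [
    # (gender, date_of_birth, postcode_district, condition)
    ("F", "1942-03-05", "M50",  "kidney disease"),
    ("F", "1942-03-05", "NW11", "arthritis"),
    ("M", "1942-03-05", "M50",  "diabetes"),
    ("F", "1967-11-21", "M50",  "asthma"),
    ("F", "1967-11-21", "M50",  "arthritis"),
]

# Size of each equivalence class under the quasi-identifiers alone.
classes = Counter((g, dob, pc) for g, dob, pc, _ in records)

for key, size in classes.items():
    print(key, "->", size, "record(s)")

# Records whose quasi-identifiers are unique in the dataset.
unique = [key for key, size in classes.items() if size == 1]
```

With a full postcode instead of just the district, the classes only get smaller; every extra attribute (medications, chronic conditions) shrinks them further.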
To make it suitable for study: a) keep it within the NHS or the relevant health authority - perhaps allowing an airgapped DeepMind server to come in, process locally, and afterwards get wiped or shredded - or b) remove all personal data, including gender, dates, and associated medical info of any type, i.e. render it almost useless for study.
Oh and c) imprison all the management who agreed to share this data without patient consent and in breach of DPA, and require Google to remove all trace under audit.
Google will do what they have always done: ignore privacy, train models on the full data, apologize later, and delete the data when it's no longer needed.
You don't know this, but people at Google are gangsters and bullies.
They don't give a hoot about laws in Europe, about privacy, or any of that. They think they are right and that everyone else is wrong.
See the guy above who said this is for the collective good, even though everyone who knows even a little bit of stats and machine learning knows full well that this sort of sensitive data will lead to unimaginable grief for outlier patients. In particular because Google cares about money, not the collective good!
Everyone should be outraged; it is the only rational response.
"proper anonymisation" is a set of goalposts that keep moving - it's like password length or bcrypt work factor. What's enough today won't be enough with CPU, or in a year or three, or perhaps a suitably fancy AI from the world's largest data hog. Proper anonymisation likely requires all associated data - any dates, blood group, medications, chronic conditions etc removing, hence mention of making it so bland as to be almost useless for study.
Was apparently to get the team that could get this sweeping level of data: https://www.businessinsider.com/google-deepmind-sets-up-heal...
The Hark team (https://www.harkapp.co.uk) brought huge influence in the NHS and at the national level:
Because I let users know I'm not going to spam them with that shit. Those who enacted that law can go to hell for making every site I visit have it. It's a good thing, but really annoying. There has to be a better, more interesting way.
Your browser controls your cookies - use that, clear them, do whatever you want.
The 'Cookie Law' is the 2011 ePrivacy Directive. It will (eventually) be replaced by the ePrivacy Regulation. https://www.cookielaw.org/the-cookie-law/
The modern privacy law is the GDPR, which came into force in 2018. Pre-dating that were the somewhat country-specific interpretations of the 1995 Data Protection Directive.
Google's launched healthcare products are here: https://cloud.google.com/solutions/healthcare/
It's hard to say what's going on in the WSJ article, since it has no details.
> The data involved in the initiative encompasses lab results, doctor diagnoses and hospitalization records, among other categories, and amounts to a complete health history, including patient names and dates of birth.
Maybe a HIPAA expert can chime in, but I'm reasonably sure it would be a violation to show you ads on your devices based on personal healthcare data that Google obtained without your explicit consent. It might, however, be legal if Google uses all the patients' healthcare data to train a model that it can use on all users.
"The Privacy Rule allows covered providers and health plans to disclose protected health information to these “business associates” if the providers or plans obtain satisfactory assurances that the business associate will use the information only for the purposes for which it was engaged by the covered entity, will safeguard the information from misuse, and will help the covered entity comply with some of the covered entity’s duties under the Privacy Rule."
Using the data to create a model for use at the providers would be a clearly proper use of the data. That's standard practice across the industry.
Why does Google get PII?
They'd be legally liable. Same as those they sign BAA with.
No, companies almost never manage the claims themselves. They underwrite the plans, but managing claims is cost-prohibitive, as it's outside the core competency of most large companies.
Self-insured doesn't mean they manage claims themselves. For most companies, managing claims would be cost-prohibitive and an absurdly wasteful use of resources.
> That doesn’t mean people in HR and Finance don’t have visibility into claims data.
I mean, this is technically a true statement - "A does not necessarily imply not-B" - but kind of irrelevant, because as it turns out, HR and finance generally do not have visibility into individual-level claims data (as opposed to aggregate data, which is necessary for underwriting).
I'm not debating with you but somehow, Tim Armstrong (then CEO of AOL) found out about healthcare claims for one of his employees. The story isn't clear on whether the granularity of the claims data would reveal to Tim who the particular employee was. (Even if Tim didn't know the exact employee, I'm sure he could ask a few questions and/or look at employees' sick day records to figure out which employee it was. If the company pays $1 million in a health claim, that's not necessarily going to stay a secret.)
I’m not suggesting that there’s a team of employees watching you every time you pick up a prescription, but of course they could access an individual claim if there were a legitimate reason to. Not sure why it’s so hard to believe that an insurer would be legally allowed to do so if they are literally taking on the risk themselves, even if it’s just to audit the TPA to make sure it isn’t stealing from them.
If you are an extremely ill employee and you’re using a lot of healthcare your employer will know without you needing to inform them.
For the most part, people can't seem to distinguish claims data from healthcare records... It's pretty scary, as these are the people most likely to receive funding to "disrupt" the health care industry.
Claims data is not equal to medical records, but to your point, it is extremely valuable data and can easily act as a proxy for medical care - even more so when tied together with Rx claims data. Even providers (doctors/hospitals) themselves don't have access to insurance claims data for patients... which results in perhaps a million hospitalizations and billions of dollars in healthcare costs per year because of the lack of data sharing with providers.
With that said, we actually do have access to claims data for a significant chunk of our patients now, and that percentage is growing fast. It's part of a big push towards shifting risk from payors to providers to incentivize cost reduction.
This is probably the lowest-hanging fruit of all waste in Medicare. Consider that the average Medicare patient has 7 prescribing physicians and 10 prescription therapies, yet not a single physician has access to the claims data to see what another physician has prescribed, leading to duplicate therapies and adverse drug interactions at ridiculous rates. These are generally chronic care patients, too, meaning complications with diabetes, blood pressure, cholesterol, etc., resulting in costly hospitalizations.
MTM was a step toward fixing these issues - I'm guessing that's perhaps what you're referencing regarding improvements in sharing claims data (maybe you use OutcomesMTM) - but realistically, MTM is just a program for insurers to monetize their claims data.
Even if they all know the difference between claims data and healthcare records, that wouldn't show up in the commentariat at large.
Your fear is unfounded.
It is such a relief to read this position on Hacker News, for once. I am continuously astonished by tech workers' refusal to recognize these conflicts between our interests as individuals, and our employers' interests in profitability. Work only gets done when these interests find an equilibrium.
I am especially concerned with how little my employer considers my long-term health. With tech job turnover rates universally understood to be a handful of years, I know my employer is unmotivated to maintain my long-term health, preferring to accrue "tech debt" in me, a jettisonable unit.
Our vote applies immediately, not to mention there are term limits, so I trust the government much more also.
If you have to trust the government on this anyway, cutting private insurers out seems to be minimizing risk, no?
The government plays almost no role in keeping most health data confidential, since most PHI is held by private entities.
And it's not like the government has a great track record of what little PHI it is responsible for - there have been plenty of breaches against CMS and Medicare affecting large numbers of patients.
Why would for-profit companies keep the data confidential when they can make more money selling it?
The fact that the government has passed a law that requires private entities to keep PHI confidential says nothing about their competence in managing PHI of their own. The original claim is about the latter.
Edit: More damningly, if my employer has it multiple governments potentially have access to it. Companies generally aren't willing to stand up to requests from, for example, China.
"The government" isn't a single entity, and most of "the government" can't access your data via a subpoena any more easily than a private company can.
Basically it gives them access to more private healthcare options. If they don't get timely access, they are covered under any network.
> "We are offering them choice in the medical marketplace, and we now have, thanks to the president and thanks to choice, the highest veterans satisfaction rate in our history. We're sitting at about 89.7 percent"
- Robert Wilkie, U.S. Veterans Affairs Secretary
Huge companies like SAP or Microsoft or Oracle will do this, even though they may also hire a traditional branded health insurer to "manage" this large-bank-account health plan (i.e. give out cards, negotiate with hospitals, etc.). But the bank account (brokerage account, really) remains in control of the corporation, and the traditional plan premiums are not paid to the health insurance company named on the card. Fees for managing the plan are paid separately. Doctors and hospitals get paid from the corporation's account. This is what the parent means when they say that the corporation has visibility into employees' healthcare usage.
And while these companies do have access to detailed claims data in theory, in practice access to this data is typically heavily limited to a small number of HR or finance employees who are responsible for healthcare accounting and financial management, and the data presented is typically restricted to a large-claims report within a monthly/quarterly reporting period.
>>"Covered entities may disclose protected health information to an entity in its role as a business associate only to help the covered entity carry out its health care functions – not for the business associate’s independent use or purposes, except as needed for the proper management and administration of the business associate."
BAAs keep the chain of HIPAA obligations in place, but having a BAA in place does not allow for violations of privacy.
What you're saying simply is not true.
Do you have any source for your suggestions, or is this just a random hypothetical?
Self-insured plans mean that the companies underwrite their own plans, but they almost never manage their own claims. That would be cost-prohibitive and absurdly disadvantageous. The employer never touches raw claims data at all.
Companies have access to aggregate claims data, but they absolutely do not have "complete visibility into every health care interaction you have while employed".
However, the employer would not have your medical records without consent of the employee, this works for self funded or fully insured.
But that's the case for all tech coverage nowadays.
Without restraint, these efforts will undeniably bolster their ability to conduct targeted advertisements, which is bad. Hopefully regulation and consumer backlash will take care of that.
But you cannot deny the amount of good we stand to gain from an advanced electronic health system. The insights that can be generated from cross correlation of all patient data, the amount of preventative measures that can be taken from detecting problems before they become bigger problems... The benefits far outweigh the negatives. It's unthinkable to have all this data and NOT attempt to find insights in them.
Not one that has anything to do with Google though.
I'd like to think that Google is concerned about more than just profit, but they prove again (censored search for China) and again (YouTube policy changes) and again that they are not. Profit rules Google's decisions.
And that is why there's such backlash against Google getting this data. Because the chances of anything good for us as patients is slim to non-existent.
The healthcare systems they are collaborating with (e.g. Mayo) are strongly motivated to improve the health of their patients, particularly in capitated models like accountable care organizations. Mayo is actually pretty famous for adopting the ACO model. Note that the author of this article (1) from Mayo Clinic Proceedings is David Shulkin, who went on to become the Secretary of Veterans Affairs.
You may also be interested to know Ascension is another ACO (2).
So why would an ACO be motivated to work with Google? Because they know reducing diagnostic variance is almost certainly identical to improving quality of diagnosis, which will reduce poor outcomes and reduce malpractice, cost of overtreatment, cost of undertreatment, and so on.
Do we have any assurances that Google's efforts will be limited to improving care? Or are they being compensated by being able to use that data in other opportunities?
In terms of business actions, let's look at the NHS brouhaha. It's hard to blame Google for the NHS screwing up the research protocol that led to the specific 1.6M patient records being transferred to DeepMind, and they corrected the research process years ago. They passed muster with Mayo (a deal that no doubt had to pass muster with Shulkin among other world-renowned physicians and administrators). They have deals with McKesson, Cleveland Clinic, and now Ascension. These are major players.
Their leadership choices give you additional insight into their motives. Their new Chief Health Officer is Karen DeSalvo, former National Coordinator for Health IT and Acting Assistant Secretary of HHS (and no doubt a candidate for the next Secretary of HHS). David Feinberg, their new VP for Health, comes from serving as CEO of Geisinger, and of UCLA before that. These people are reputationally allergic to mixing medicine with adtech.
And there is nothing wrong with monetary profit as a motivation; the pursuit of profits has elevated the living standards of billions around the world, saving countless lives.
Profit itself is a dangerous motivation. Profit tempered by morals is what will improve lives. Google is showing that their drive for profits is not being tempered by morals.
Edit: It basically doesn't matter what procedure Google's algorithm says you should get. If your insurance doesn't want to cover it, you're not getting it.
Ethics != law. Just because it's lawful doesn't mean it's ethical or that we shouldn't be outraged.
> The hope is that any breakthrough might lower healthcare costs
Perhaps then Google should instead lobby hard for single-payer. Most countries with a single-payer system have lower health care costs than the US. It's a proven solution that will lower costs, not a "hope".
And now with Fitbit going to Google, it appears that Google wants to know everything about you beyond your name and location - potentially serving you ads on your Fitbit, or prescription recommendations driven by those ads.
If this is what Google calls the "future of healthcare", then they need more luck than ever, as the healthcare industry is extremely regulated when it comes to healthcare records.
Over my dead body.
This includes test results, doctor communication, and even the feedback page where I composed and sent my complaint.
As an example, let's say they identify 50 specific people who are going to die next week due to (say) a heart attack.
They could decide to notify only those people who are friendly to the Google/Alphabet world view, with the others purposely not being notified.
The point being, stuff like this can be misused and Google/Alphabet is well down the path of doing dodgy stuff already. :(
I guess this isn't a HIPAA violation?
... privacy experts said it appeared to be permissible under federal law. That law, the Health Insurance Portability and Accountability Act of 1996, generally allows hospitals to share data with business partners without telling patients, as long as the information is used “only to help the covered entity carry out its health-care functions.”
That is a very broad statement within HIPAA and subject to interpretation, which leaves a huge door open for marketers to "help covered entities carry out their health-care functions"
Something is seriously broken with a system that lets them give away your health data without your consent, and the lack of anonymization only compounds the injury.
Basically, a hospital can give data to a partner if it has to in order to deliver care. But that was meant for cases where, for instance, your doctor gives a few DICOM studies to GE because they are going to work together on modifying the settings on a CT scanner to do some whiz-bang thing everyone thinks will help the patient (and maybe even other patients down the line).
Giving it to Google so they can serve you ads, er, um, I mean, "recommend different treatments" to you, kind of stretches the letter of the law. And definitely breaks the spirit of the law in my opinion.
The letters in HIPAA stand for "portability" and "accountability"; privacy doesn't show up anywhere in the acronym. While privacy in healthcare is important and is codified in other laws, HIPAA's whole deal is to try to standardize a system by which those other laws can be followed, not to "guarantee consumer privacy" as an end of HIPAA itself.
And using it to serve you ads would be a clear violation of the law.
Nobody is serving ads with this data. Sheesh.
Is that actually allowed without patient consent?
I'm not really sure how Google finagled their access. It's odd when neither the doctor nor the patient knows. I wonder if even the hospital administrators knew, or whether this was a decision made at the system level, with the average hospital CEO at Ascension none the wiser.
For all you know, you're signing away for an assisted suicide and authorizing your organs to be transplanted. I refuse to sign on the pad, I'm the patient who asks for it to be printed out on paper, signs it, and gets a copy too.
If you are consenting to share data for a trial, or otherwise sharing it with a company for development, someone (often paid for separately) will typically come and walk you through the consent form.
This is unusual.
"All work related to Ascension’s engagement with Google is HIPAA compliant and underpinned by a robust data security and protection effort and adherence to Ascension’s strict requirements for data handling."
Maybe HIPAA isn't strong enough?
> There's no incentive for them [Google] to improve health care for the patients
Ignoring the fact that Google is staffed by humans, and humans have a deep visceral response to engaging in healthcare: the healthcare systems they are collaborating with (e.g. Mayo) are strongly motivated to improve the health of their patients, particularly in capitated models like accountable care organizations. Mayo is actually pretty famous for adopting the ACO model. Note that the author of this article (1) from Mayo Clinic Proceedings is David Shulkin, who went on to become the Secretary of Veterans Affairs.
Further, let me present evidence that I believe indicates Google Health is going to be laser-focused on improving care and will actively shed any work that doesn't advance that goal: not only their business actions, but also their leadership choices.
In terms of business actions, let's look at the NHS brouhaha. It's hard to blame Google for the NHS screwing up the research protocol that led to the specific 1.6M patient records being transferred to DeepMind, and they corrected that research process years ago. Do you really think Google wants to be seen anywhere near the mishandling of private information? That presents an existential risk to their business. They passed muster with Mayo (a deal that no doubt had to pass muster with Shulkin among other world-renowned physicians and administrators). They have deals with McKesson, Cleveland Clinic, and now Ascension. These are major players.
Finally, keep in mind that healthcare is widely regarded as one of the weakest points in Western cyber security. Bringing in grown-ups sounds like a phenomenal move to me.
Would it be nice if the company's core business wasn't adtech? I suppose. But for all the reasons above, I genuinely believe Google getting into this space is a better net outcome than the status quo.
I used to work at both Mayo and Google.
Mayo isn't shy about the need to protect its name and reputation. They consider it one of their most valuable assets, if not the most valuable; after all, they built it over a century of hard work. Sometimes it borders on excessive. If you went to a conference, unless you were presenting, it was suggested that you just say you were "from a large medical institution in the Midwest". If you sold them software or services, you were allowed to use their name (trademark) in the list of customers on your website, but only if the list had at least 4-5 names and was in alphabetical order.
The day Mayo signed, also in light of their past collaboration with IBM (I was around in the BlueGene days...), I knew that Google must have committed large amounts of money and resources to the partnership. For those who are not aware, Dr Plummer at Mayo pretty much came up with the modern idea of a medical record a century ago and even had a sort of human-powered Google for records, keeping them in the basement, calling them up over intercom and delivering them over tubes.
I imagine Alphabet’s end goal is some kind of intrinsic understanding of every single person on a biological level in order to better target and serve them ads that fit them.
It's the opposite: Google sees that their revenue stream is very nearly a monoculture, and they know monocultures can't last forever. A lot of their initiatives (cloud, video, wearables) are trying hard to find an alternate revenue stream that will be as profitable as ads, so that when the industry inevitably crashes (not because of anything specific they can predict, but because all industries eventually do), they're not powering their entire empire off a single river that just dried up.
0) Google's only purpose is to make money.
1) Google's strongest strategy (by orders of magnitude) for making money is targeted advertising.
2) Google has surpassed the critical size threshold beyond which morality plays no role in business decisions (see surveillance state-supporting tech in China among other issues).
Due to these facts it appears inevitable that Google will seek in all of its large-scale actions to involve ad tech in whatever it produces or engages in. It's how they fulfill their purpose (selling the most sophisticated targeted advertising possible).
A naive approach would be to hyper-optimize ad targeting in pursuit of rule 0, but Google does its best not to be naive. The company's ownership model is designed for the long game, and long-term thinking does seem to guide much of their decision-making.
That isn't to say that advertising doesn't factor in, or that they definitely won't ever use biometrics for ad data, merely that "we could use this for ads" probably doesn't by itself justify buying Fitbit. But potential uses in advertising and Assistant might.
Sure, they can morph into different mediums and appear in different contexts, but that doesn't make them "not ads."
Online ads specifically have enjoyed a premium over even "traditional" print and television ads because of highly specific microtargeting: an online service can combine demographic data with ad targeting to promise that you're paying specifically for the high-value slice of people you want to reach. That's an expectation print and TV (being wide-cast, non-customized media) have difficulty offering.
Various phenomena (market regulation, the Facebook / Twitter data walled gardens, the proliferation of ad- and script-blockers) threaten the guarantees that online advertising has been able to make, and it's unclear that Google's high-cost-of-operation empire survives unscathed if money flows away from premium online ads (either by advertisers going to alternate channels or by advertisers saying "Your value-add claims are bunk and I'm not going to pay for them").
Not because they think that's the right course of action but rather because they'll get fired if they don't.
Cognitive dissonance overwhelms me.
Why would they be any less trustworthy than the hospitals or insurance companies that normally have this data?
My hope would be that the government steps in and makes your EMR (electronic medical record) something akin to your SSN. It is your data and shouldn't belong to the provider.
It would require an institutional body/government to standardize the EMR and centralize it, and to allow individuals to share their EMR key with whomever they wish (Google, your employer, etc.) so that analytics can help tailor health care to the individual.
Wishful thinking, but I hope Google is able to achieve significant outcomes with the data to show it's possible.
> It is your data and shouldn't belong to the provider.
Agreed! What do you think of the concept of the "Copy/Paste Test"? The idea is that a good EHR should let you copy/paste your entire medical history into an email in a non-fragile way. If it can't do that, it's not a good EHR. We think this one dimension encompasses a lot of the sub-dimensions of a well-designed EHR system.
> It would require an institutional body/government to standardize the EMR and centralize it
We're thinking the opposite: decentralized, git backed, concatenative grammars. You eventually probably would indeed have one grammar rise to the top, but the idea is to allow anyone to view, suggest edits, and fork the collection of grammars. Here is the current collection of Pau Grammar files: https://github.com/treenotation/pau/tree/master/grams. Note: this isn't even v1 yet, but the core ideas are there.
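To make the "2-D language" idea concrete, here is a minimal sketch of a parser for an indentation-based tree format in the spirit of the grammars linked above: one node per line, space-separated words, children indented one extra space. This is an illustrative assumption, not the actual Pau grammar; the node names (`patient`, `vitals`, etc.) are invented for the example.

```python
# Minimal sketch of an indentation-based ("2-D") tree parser.
# Assumes one node per line, words separated by spaces, and
# children indented one extra space. Node names are hypothetical.

def parse_tree(text):
    """Parse indented lines into nested (words, children) nodes."""
    root = ([], [])            # sentinel root: (words, children)
    stack = [(-1, root)]       # (indent level, node)
    for line in text.splitlines():
        if not line.strip():
            continue
        indent = len(line) - len(line.lstrip(" "))
        node = (line.strip().split(" "), [])
        # Pop back up to this node's parent.
        while stack and stack[-1][0] >= indent:
            stack.pop()
        stack[-1][1][1].append(node)   # attach to parent's children
        stack.append((indent, node))
    return root[1]

sample = """patient
 name Jane Doe
 vitals
  heartRate 72
  bp 120 80"""

tree = parse_tree(sample)
```

After parsing, `tree[0]` is the `patient` node, whose children are the `name` and `vitals` nodes; the whole parser fits in a dozen lines, which is part of the appeal being claimed for 2-D languages.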
On the surface, this seems to be a poor test. Are you using it as a proxy for a patient to be able to get their EHR records out or something else?
It's a proxy to test a lot of dimensions at once: not only how accessible the records are to the patient (facilitating care, particularly in acute settings), but also how well designed the grammar and schemas are. Well designed grammars and schemas should survive copy/pasting easily. Any errors should be quickly and readily identified with the potential for autocorrections.
VA created a very cool thing called Blue Button (https://www.va.gov/bluebutton/). It is a step toward passing the copy/paste test. Any veteran can download their complete medical history in a single file. The schema isn't there yet and parsing these things is a pain, but a step in the right direction.
I guess initially I would think of this as a round-trip requirement. In theory at least I should be able to download the entirety of my history (in an appropriate format); delete the record on the EHR; re-upload my history and end up with the identical EHR record mutatis mutandis.
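That round-trip requirement is easy to express as an automated test. Below is a hedged sketch against a toy in-memory store; the `EhrStore` class and its method names are invented for illustration, not any real EHR API.

```python
# Hypothetical round-trip test: export -> delete -> re-import
# should reproduce the original record exactly. EhrStore and its
# method names are invented for illustration.

import json

class EhrStore:
    def __init__(self):
        self.records = {}

    def export_record(self, patient_id):
        # Canonical serialization so round-trips compare cleanly.
        return json.dumps(self.records[patient_id], sort_keys=True)

    def delete_record(self, patient_id):
        del self.records[patient_id]

    def import_record(self, patient_id, blob):
        self.records[patient_id] = json.loads(blob)

store = EhrStore()
store.records["p1"] = {"allergies": ["penicillin"], "meds": []}

blob = store.export_record("p1")   # download the entire history
store.delete_record("p1")          # wipe the EHR-side copy
store.import_record("p1", blob)    # re-upload it

assert store.export_record("p1") == blob  # identical record
```

A real EHR would need the same property over a far richer schema, but the shape of the test is the same: serialize, destroy, restore, compare.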
Sticking "email" in there had me thinking you were focused on transmission.
I like this test! Yes, passing the copy/paste test should pass this test as well.
I agree 100% on the "granular" part.
For "revocable, and non-transferable" I'm not sure how you would do that without making it not worth the trouble, but maybe I'm just not seeing something.
One thing I think solves a number of problems is what we call the "synthesize test". An EHR should be able to synthesize as many medical records as a researcher needs with 1 click. Then you could design software against lots of synthesized records, and move the compute of actual real patient records to the edge, on their devices. Maybe you could do something like that for the "revocable/non-transferable" thing. Patients submit their records to a machine with a ticking expiration, that does the training and emits the trained model and destroys the training data (assuming we can prevent leakage of sensitive info into the model).
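The "synthesize test" can be sketched in a few lines: one call should yield as many plausible records as a researcher asks for, deterministically if seeded. The field names and value ranges below are invented for illustration.

```python
# Sketch of the "synthesize test": emit N plausible synthetic
# records on demand so tooling can be built without touching real
# patient data. Field names and ranges are invented assumptions.

import random

def synthesize_records(n, seed=0):
    rng = random.Random(seed)  # seeded for reproducible cohorts
    records = []
    for i in range(n):
        records.append({
            "patient_id": f"synth-{i:06d}",
            "age": rng.randint(0, 100),
            "heart_rate": rng.randint(45, 110),
            "conditions": rng.sample(
                ["hypertension", "asthma", "diabetes", "none"], k=1),
        })
    return records

cohort = synthesize_records(1000)
```

Researchers could then develop and test software against `cohort`, while any training on real records happens at the edge, on patients' own devices, as described above.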
What I am suggesting is that data/learning companies need to start thinking of this sort of data differently, that what you are negotiating is access to the data under certain terms not ownership of the data.
If the individuals retain ownership, we the company should not be able to say sell that access on to a third party (c.f. Cambridge Analytics), or use it in a different way than negotiated for. If the company does not live up to the terms negotiated, the individual should be able to revoke the permission. There are various ways to do this, one being something like you describe.
In most developed countries, they pull his ID out of his wallet and type it into the nearest computer. National healthcare has many advantages.
Please stop spamming HN to promote your project. We've all seen it and decided whether we are interested.
IRL I'm not an argumentative person, but I consider coding/posting on forums to be my equivalent of the gridiron, and I hope people don't take offense when I fight hard on the field. I'm just hoping we can all get to the correct answers the quickest, and so if people think I'm wrong, please provide data or detailed explanations as to why, or else be prepared to take some heat.
Let's go. Put your money where your mouth is. Here's my bet: http://longbets.org/793/. Let's see your argument for why YAML is better. YAML is full of unnecessary complexity, and we have immense research to back that up. If you want to argue, do some actual work.
> with little or no apparent work done toward buy-in from the enormous ecosystem of medical provider
Let's see: in the past 4 weeks we had meetings with 3 people from NIH and 2 from OHDSI, talked with 5 different healthtech startups, met with researchers from 3 different top-tier institutions about collaborating for the R1 Feb cycle and with researchers from 4 different countries, and I was on a panel on medical tourism speaking about the importance of portable medical records... you were saying? You might not want to make assumptions, because you make yourself look like an ass.
You say you have had meetings, yet that's not actually buy-in. People will take your idea more seriously when you have actual adoption.
Agreed. What it does say is that I have a lot of confidence in the technical merits. If someone spots a flaw in the technical merits, I'm all ears (e.g. prove why 2-D or 3-D languages are inferior to 1-D languages). If someone can show me a data structure that is more efficiently (fewer parts) represented by a 1-D language, that would be a great argument against 2-D languages. But that hasn't happened yet, which makes me more confident in the technical merits.
> Your project necessitates re-implementing parsers, viewers, and all the other tooling that goes into a new grammar;
It's a lot of work and investment, I know; I'm funding a lot of it myself. But the potential rewards for the world are vast, and sometimes you have to do things that are hard. The bet is that because this is simpler, the effects will compound, and the evidence so far points that way. For example, the Grammar Tree Language now gives you parsers, highlighters, type checkers, visualizers, synthesizers, go-to-definition (new this week, thanks ZK!), etc., for a new Tree Language in very few lines of code relative to existing 1-D languages. If you agree that "software is eating the world", and I'm correct that 2-D/3-D languages are a better kind of software, then I think this could create far north of $1 trillion of value annually for the world. More importantly to me, it can help revolutionize medical records, and with them both medical research and healthcare delivery.
> You say you have had meetings, yet that's not actually buy-in.
I agree! I wish I had more buy-in but it's still very early and we are working on it. The parent comment said "with little or no apparent work done toward buy-in" (emphasis added). We are indeed working toward buy-in.
Yes. If not from Tree Languages, then from someone else's novel non-1-D languages. This isn't shooting from the hip, either: we've built the world's largest database of programming languages and notations (over 10k languages, over 1k columns), so we can forecast and simulate the future.
I mean granted most of the other tech giants aren't far off from them. But that just means none of them should get a pass on that crap.
Sure, Google has mostly benign reasons for wanting this data, but putting it in a big warehouse makes it a juicy target for anyone who wants to oppress you, for any reason. There's a reason the secret police in the USSR kept records on everyone: the only way to really guarantee compliance is to know exactly what the subject is doing at all hours of the day, even when they think they're in private.
The shitty thing about this is there's effectively no way to opt out. Just being alive in this time is enough for these companies to gather an unbelievably creepy amount of information about you.
It's not "surveillance capitalism" to collect and then publish a pile of damaging data on a person; it's just sloppy, and makes people less inclined to do business with the company that screwed up (on average).
I trust Google specifically with my data because Google has some real good incentives to keep it private.
Fascism is fascism, and I find it interesting (in a cognitive-dissonance sense) how many of my European peers are leery of data collection by a Google but are extremely comfortable with their respective governments running universal healthcare systems. Aren't those governments one fascist dictator away from all that centralized government health data being used to drive a mass extermination? It sounds a lot like people's real disagreement is on choice of master of the data, not whether data aggregation has utility to them.