The team is joining DeepMind/Alphabet, and it appears DeepMind promised that health data would never be joined to your Google account.
And there's zero evidence that they are joined or that they would be joined. As far as I can tell, fearmongering like this would suggest that Alphabet couldn't ever deal with health care info, which feels silly... There are plenty of very strong laws enforcing health care privacy (e.g. HIPAA in the US), so this really just feels like FUD.
> [data] "will never be linked or associated with Google accounts, products or services"
Streams is plainly a Google product and is probably using deep learning models trained on this data. Moreover, patients were never given the option to opt out of this data collection at all.
What is trained on what data? The point is that health care data will never be linked to your "Google account", e.g. your Gmail/YouTube/etc. or other Google services -- AdSense can't show you ads based on a hospital visit.
DeepMind is part of Alphabet, like Google is part of Alphabet. DeepMind can do its thing without ever touching Google accounts or Google data.
>> is [...] a Google product
> I can't tell what you're trying to say...
Surely it's clear...said it wouldn't be associated w/ Google products and became a Google product. You can bring up nuance or twist words or whatever, but it's clear that at the least they misled.
It was true at the time, but surely what people care about is data flowing between the medical app and existing Google services? i.e. the wording could be revised to say that and still keep the original intended meaning.
Whether you think the company separation was critical to trust is a rather different question to whether they have broken the intent of their original promise. If you don't trust Google then you presumably also had suspicions about the previous arrangement.
> was true at the time but surely what people care about is data flowing between the medical app and existing Google services?
I would think people would care about the data flowing to any Google service. An understanding that it only applied to existing services like you're claiming would be meaningless. A hyperbolic example of this is that Google/DeepMind would still be compliant if they came out with a brand new service, Google Privacy Destroyer 1.0, that puts all these people's private health data up for sale on the dark web.
The statement unfortunately has a secondary implied promise which Google has broken. But my point is that if we don't trust Google to keep the data separate then the initial promise was meaningless. And if we do then not much has changed.
If Google wanted to do bad things then they would have kept the companies separate to keep their promise superficially.
The wording was poorly thought out but unless you think the reorganisation is going to materially change Google's behaviour then I don't see it as hugely significant.
So the new promise is "Patient data remains under our NHS partners' strict control, and all decisions about its use will continue to lie with them." Except I can think of at least two or three good reasons off the top of my head why Google could renege on that without breaking the spirit of their promise too. Maybe Google decides that the information will be more secure on their servers, and the original point was to make it more secure anyway, right?
Yes, Google's original promise was kinda arbitrary and over-strict. But the way you build trust is you keep promises, even if they're arbitrary. When Google says they're not going to do something, they should not do it. Otherwise, nobody will trust them in the future when they say they're not going to do other things. Nobody wants to live in a world where we have to second guess what the intent is behind everything a company promises and are only able to hold them to that.
If Google can't keep its promises, then it shouldn't have made any promises in the first place. It should have just given a milquetoast answer like, "we care about privacy", which would have communicated exactly the same information without deceiving customers into thinking that there was an actual line in the sand somewhere.
How is this not the world we already live in?
Every agreement, obligation, promise, or contract I've read from my major tech 'partners' professionally and personally has had a clause that they can change it at any time with no prior notification.
That being said, at least they include the clause. If this promise had been phrased as, "we won't link with Google products, but we might in the future", I would honestly feel a bit more charitable towards them. It wasn't.
DeepMind went out of its way to describe this not just as a temporary promise, but as a legal one. In their words, "...data will never be connected to Google accounts or services, or used for any commercial purposes like advertising or insurance. Doing so would be completely impossible under our NHS contracts and the law".
That is a full step beyond the crappy TOS that most companies use, and statements like that were given to ease consumer concerns and make it easier for DeepMind to form deep relationships with hospitals.
One other thing to note outside of legal obligations or technicalities is that even if you don't think Google did anything different from the companies you're talking about, companies who do 180s on their TOS still lose trust. The trust loss is separate from the legality.
I don't know if you can go to Google and say, "legally, you shouldn't be allowed to link your product"; I'm not a lawyer. I do know that we as consumers and engineers should instinctively distrust any future promises that Google makes about privacy.
If what they're doing is legal, I'm still going to call them out for being untrustworthy and dishonest. Their earlier statements are still lies.
Maybe it is the case at present. But I see zero probability that it will always be the case.
Assuming hyperbole: why do you believe that, over time, there is a low probability "DeepMind can do its thing without ever touching Google accounts or Google data?" What factors will contribute to that?
It doesn't say your Google account or product; it says Google products in general.
It seems to me that they are breaking this promise.
It's not like it was an existing Google product that the data is being moved to.
If Google Health was part of Alphabet but not Google, would you argue it was "plainly not a Google product"?
Or is that still a Google product?
(FWIW: I actually expect the article gets it wrong and that Google Health will be/is a separate LLC under Alphabet but not Google)
So now it's a Google product. And when they said [data] "will never be linked or associated with Google accounts, products or services", that was a lie. Seems very straightforward to me.
> If Google Health was part of Alphabet but not Google, would you argue it was "plainly not a Google product"?
It has Google in the name. I don't understand this claim. It's a Google product. From the official blog post that Streams released yesterday:
"We’re excited to announce that the team behind Streams—our mobile app that supports doctors and nurses to deliver faster, better care to patients—will be joining Google."
They're not joining Alphabet; they're joining Google. I mean, I guess linking to Google is technically different than literally being consumed by Google?
Just because something has Google in the name does not mean it is part of the same legal entity as the rest of Google.
This is already the case in various situations.
Your view seems to be "branding" is what makes it a Google product, and that seems wrong. Branding has no effect on anything material at all.
If Google called it "Alphabet Chrome" but it reported into the Google entity, would it suddenly remove privacy concerns around Chrome? As I asked, and you never answered, would you suddenly no longer call it a Google product?
What really matters is what entities it is part of and what walls/agreements/etc exist between the entities, etc.
They're joining Google. Representatives from DeepMind have said that they're joining Google. Every source I can find says that they're joining Google. You're talking about a hypothetical that doesn't exist.
If Streams was joining Alphabet instead of Google, that would probably change things a bit. If they announced they were joining Mozilla instead of Google, that would change things even more. But based on all of the information we have now from both Google and Streams, neither of those things are happening.
Is your position that the blog post that Streams itself used to announce this about their own acquisition is wrong?
That seems implausible but would make sense.
I wouldn't see any advantage to making it part of the same legal entity, and a lot of disadvantages.
Google does this in plenty of cases, actually.
Search for Google. A subset of the results are Google entities.
That's pretty misleading. I wonder if the news leaked before a name was picked?
I don't get how people still try to defend Google. When you see what Google is doing with our data, they should be prohibited from accessing any health information on us. Google has just proven itself to be one of the most data-hungry and unethical companies of the moment.
And how does that apply in this context? Are you conflating Google and their parent: Alphabet?
Also, comparing Google to Big Brother is disingenuous at best. Google isn't trying to control or limit what people think. They are merely trying to allow advertisers to target audiences precisely.
I hate advertising, for sure. But the anti-Google fear-mongering and hate based on nothing substantial and leveraging an inaccurate view of what Google does is harmful for everyone involved.
Alphabet isn't the result of a merger, or Google getting bought out by a different company. It's largely the same people and culture that Google was and is, just split out into a larger parent organization so they can oversee more products.
If I change my name to Eric, that doesn't make me a different person.
I think proactive censorship for authoritarian governments (e.g., China) falls into this category.
Google is missing out on the soon-to-be biggest ad market of the world - at least partially due to ethical standards. Apple does earn a significant part of its revenues in China and has no problems with working with the govt there.
(I am not affiliated in any way with either company - just don't understand all the hate against Google)
So the relevant product groups and leadership are, at least to some degree, implicated in deciding to go along with rules they knew would be morally objectionable from the start.
That said, Chinese folks are imo quite right to cry "hypocrisy" on the degree of moral outrage US citizens level at Western nations given how many ways western nations police dissidents and stifle anti-government speech.
If not, then clearly the principle isn't so solid after all. But I don't know what to replace it with.
(I've removed contextual text that I think is irrelevant for this discussion.)
... Civil disobedience is the refusal to obey laws, not the refusal to uphold them. People participating in civil disobedience do it with the understanding that they can (and should) be prosecuted for such. They do so to act as martyrs.
The bottom line is, you need to be really, really careful when you start arguing for "ethics" and "morality" as a basis for execution of law. For instance, to make a concrete example: It could be argued that based on the ethics and morality of the Nazis, that the mass murders committed under the Holocaust were in fact them morally disobeying those supra-national human rights laws. Who are you to say that the Nazi morality is wrong? You can't point to the agreed-upon supra-national human rights laws, because you are in fact arguing that law should be violated based on morality!
In fact, one of the ways to view law is as an encoding of the morality of the society it covers. Sometimes laws, being fixed entities, and society, being ever changing, drift apart over time. Same as software drifts from the requirements of business if not kept up to date. It usually takes an example like this German one to point out the absurdity, and if the law really is no longer part of the society's morality, becomes fairly easy for lawmakers to fix. (As a reminder, this law being invoked is very old -- from when Germany was a monarchy and insulting dictator kings was morally a very serious crime!)
That is a valid conflation. Google became Alphabet in 2015. It began in 1998. The vast majority of its existence and brand persona is as Google.
The Web Spam team at Google is corrupt, such that a single person can affect rankings in ways that destroy or legitimize websites. And they do fully take advantage of this power.
It’s literally impossible to say that Google doesn’t control or limit what people think, given that there’s zero transparency and zero accountability.
(It’s too late for me to edit my comment.)
Advertisement is precisely trying to control what people think. Specifically, influence their economic and political decisions.
Google's efforts against "fake news" are exactly trying to do that.
And just because one agrees with them in this one instance doesn't change the precedent.
> anything and everything that google does or doesn’t do would be considered an effort to change beliefs
Absolutely. Google (like so many other organizations) is making efforts to change beliefs all the time. That's not a problem per se, except they have so much leverage to _succeed_ at it that it might be worth thinking about the implications a bit. And statements that they don't do it don't help with that step.
Did you mean proliferation? (just trying to avoid falsehoods on HN)
But I also understand why news media in a totalitarian regime toe the party line, say. And they are also damned if they do, damned if they don't.
Again, I am not claiming that Google should be doing something different. They may well not have any other options. I'm claiming that we should be aware of what they (and various other companies and governments!) are doing and act accordingly. If someone claimed that the US government "isn't trying to control or limit what people think", say, that would probably be met with a healthy dose of skepticism. Even more so for the Chinese government. My argument is that such claims about Google should also be treated sceptically. Not as sceptically as the Chinese government; hard to compare to the US government.
Do you see the juxtaposition here?
Honestly, I don't think society as a whole has really come to grips with what advertising is or how to treat it ethically. Informing consumers has never really been what it's used for, even though that's a critical function of any proposed society that relies on open markets.
Wow, really? I must be crazy but I still honestly feel and believe Google is a good company following "don't be evil".
I think that would be a good start.
> Streams began as a collaboration with the Royal Free Hospital in London to assist in the management of acute kidney injury.
> However, it emerged that neither the health trust nor DeepMind had informed patients about the vast amount of data it had been using.
> DeepMind Health went on to work with Moorfields Eye Hospital, with machine-learning algorithms scouring images of eyes for signs of conditions such as macular degeneration.
> In July 2017, the UK's Information Commissioner ruled the UK hospital trust involved in the initial Streams trial had broken UK privacy law for failing to tell patients about the way their data was being used.
Still not super clear what happened, but it appears that Google simply failed to tell patients how it was using their data, which seems to be a violation of UK privacy law.
I'm sure to some people this does not seem super concerning, and to others it seems very concerning. In my opinion this is a mistake that should just not be made. The data privacy cultures in tech and healthcare are so incredibly different. Personal healthcare data is regulated arguably too heavily, while in tech it seems that privacy doesn't exist. It seems quite irresponsible to me that such a high-profile company as Google, one that already has a reputation for lax privacy practices, would mess up such a seemingly simple compliance issue. It suggests they simply don't care, are negligent, or think they can get away with whatever they want.
That said, the reporting on this seems to be egregiously sensationalist and fearmongering, and potentially factually incorrect. My initial reaction to that kind of language is to recoil, and it makes me side with Google rather than the reporter, although upon further consideration it seems clear to me that both Google and the journalist are in the wrong. But I am probably in the minority, and other people may respond to this kind of journalism. If it takes this level of aggressive journalism to raise public awareness of privacy concerns in healthcare, then maybe I support it...
>Streams...hit headlines for gathering data on 1.6 million patients without informing them.
Ah, good to hear! Companies always keep their promises after all, especially when it is around user-data sharing and it's all left to good faith instead of a written, actionable contract. /s
Throwback to Facebook and WhatsApp anyone?
I'm not claiming one is better than the other, and that's not a discussion to have today, but I agree with him 100%. Google, after all, invented the business and does have a hell of a PR team.
> "DeepMind repeatedly, unconditionally promised to 'never connect people's intimate, identifiable health data to Google'. Now it's announced... exactly that. This isn't transparency, it's trust demolition," [Lawyer and privacy expert Julia Powles] added.
I remember this from when DeepMind was first given access to UK patient data. The firewall between them and Google proper was a major point at the time.
The article is scant on details but while this move might not "demolish" trust it does seem to erode it.
But it didn't. It's just another PR thing Google did to get people off their backs while they continue with their original plans for AI in advertising/user tracking.
Are privacy issues around data sufficiently "AI" to be part of this?
I mean, I can easily picture a board of hyper-intelligent academic types who are only interested in skynet or brain-in-a-vat situations.
> DeepMind has consistently refused to say who is on the board, what it discusses, or publicly confirm whether or not it has even officially met.
EDIT: now I'm being down voted? Could someone explain?
Are we allowed to share your data in anonymized form for research?
I never thought much of that and, of course, always answered Yes.
Nowadays, and after the shit that Facebook and Google are trying to pull off, my answer is a resounding No.
Do those guys actually consider how much they hurt science and by extension patients?
Firstly, they sold everyone's data to a number of insurance companies, something that as far as I can tell is illegal: https://www.telegraph.co.uk/news/health/news/10656893/Hospit...
Secondly, NHS Digital created a schema for sharing records with researchers. Supposedly it was anonymous; however, it had date of birth, sex and postcode (a postcode is about 80 houses) plus the address of every interaction with the NHS.
The only thing that was missing was the name. But cross-referencing date of birth, gender and postcode with the electoral register gives you a name in 99% of cases.
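To make concrete how little work that cross-reference takes, here's a minimal sketch (assuming pandas; every name, column, and record below is invented for illustration):

```python
import pandas as pd

# Hypothetical "anonymised" records that still carry date of birth,
# sex and postcode as quasi-identifiers.
records = pd.DataFrame([
    {"dob": "1971-03-04", "sex": "F", "postcode": "SW1A 1AA", "event": "oncology referral"},
    {"dob": "1985-11-22", "sex": "M", "postcode": "M1 2AB", "event": "A&E admission"},
])

# A public-ish reference set, e.g. an electoral-register-style list.
register = pd.DataFrame([
    {"name": "J. Smith", "dob": "1971-03-04", "sex": "F", "postcode": "SW1A 1AA"},
    {"name": "A. Jones", "dob": "1985-11-22", "sex": "M", "postcode": "M1 2AB"},
])

# Re-identification is just an inner join on the quasi-identifiers.
reidentified = records.merge(register, on=["dob", "sex", "postcode"])
print(reidentified[["name", "event"]])
```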
Also, knowing a few of the people who work _for_ NHS Digital, and their involvement in leaking things to the press for personal gain, I have no faith in their moral compass.
As I'm fond of saying, there ain't such thing as "anonymized", there's only "anonymized until combined with other data sets".
Plenty of non-obvious things can deanonymize you. An accurate enough timestamp. A rare enough medication or treatment you received. The combination of treatments you received. It's all fine until a chain of data sets form that can identify you with high probability.
Unrelated, I too used to be all for "share my data with whoever you need for medical research". These days, I worry that "medical research" doesn't mean actual research, but random startups getting deals with hospitals - startups that I don't trust to not play fast and loose with the data, and don't trust to share the results with wider community. I think there was even a HN story about that some time ago.
Frankly, it's a terrible list made by people who don't understand statistics. 16 of the fields are simply unique identifiers in isolation. The only two sops offered to prevent deanonymization are dates, which are restricted to year, and location, which is restricted to a >20,000 person zipcode identifier.
A complete medical history reduced to year is probably still a unique identifier for many people. Crossed with a 20,000 person geographic restriction, the year of even a single uncommon medical event is unique for many people. And that's before we even include non-redacted information like demographics. Adding just race, gender, and approximate age can easily turn 20,000 people into a few hundred.
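The arithmetic behind that shrinkage, as a back-of-envelope sketch (the bucket counts are illustrative and assume a uniform spread, which real populations won't have):

```python
# How quickly coarse demographics shrink a "20,000 person" anonymity set.
zip_population = 20_000   # the safe-harbor geographic unit
race_buckets = 5          # illustrative category counts, not real data
gender_buckets = 2
age_buckets = 10          # roughly decade-wide age bands

expected_matches = zip_population / (race_buckets * gender_buckets * age_buckets)
print(expected_matches)   # -> 200.0 people left, before any medical detail is used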
Who can deanonymize that data? Well, Visa can see when and where you're diagnosed with things by spotting a radiologist's bill or a monthly pharmacy payment. Target can use your location and OTC medical supply purchases. Plenty of ad networks could pair an IP location and search term to a ZIP and diagnosis. And that's what I've got with 2 minutes thought and no training in doing this.
As a final, ugly aside: HIPAA coverage applies to patient-released data. Once it's anonymized, shared, and deanonymized, the new holder is likely free to shop it around with your name attached.
Those are examples of PII, not "anonymous"
And that's the point. Anything can add up to PII if you have enough clues. PII is a statistical concept, not a binary concept.
It's anything that can reasonably be used to identify a person.
But when combined, and matched against the kind of individual detailed data that Google or Facebook have about most of the population, it's a lot less anonymous.
Under the new GDPR, as far as I'm aware, it's the _act_ of trying to combine/process data to de-anonymise it (not sure of the exact wording) that's restricted.
This bar moves over time, especially as people share more (not health-related) personal information with companies.
HIPAA permits two de-identification methods. One, expert analysis, will presumably catch things like demographic identifiers. The other, safe harbor, offers a concrete list of data to remove - and that list leaves an absolutely massive amount of PII unrestricted.
The problem is that it's very hard to really anonymize data and the hospital may not necessarily know that.
However, with Google, Facebook and (yes, even) Apple getting into the game my trust is ever more shattered. Let alone any shlocky "health-app" maker which sends your most personal data straight into "the cloud".
Definitely thought needs to be put in, but I don't think "very hard" is right.
That goes for everything, and that's the problem with the PII concept.
Passwords are (if handled well) unique secret keys. Either you know it or you don't. Close doesn't count.
Personal identity is an amalgamation of dozens of personal facts, and each fact statistically deanonymizes a person.
The word "anonymity" is a little ambigious. It can mean "unable to be identified" or it can mean "namelessness". There's not much information about anyone that is truly unable to be identified, especially when combined with additional pieces of data readily available from external sources.
It all depends on who they're sharing the anonymous data with and to what external data do those partners have access.
All medical research needs patient data, and every newly approved medicine has to withstand rigorous (data-driven) testing; this did not change with new technology at all.
Only 2 things changed:
1 - Nowadays, with GDPR and increased alertness about data protection (which is a good thing in general, as the possibilities evolved more quickly than the regulation), people are explicitly asked about these things.
2 - The big internet companies like Google/Facebook (which did not behave well in the past and are thus partially responsible themselves for the public mistrust in these cases) have most experience/talent for the development of ML/AI-based technologies and are deploying this in new areas like medicine.
I don't get all the negativity here. I vastly prefer Google/Facebook working on medical progress to working on improving ad targeting. Imo they should be encouraged to use their expertise and resources for progress in these fields.
IF they are indeed found guilty of breaking promises and harvesting highly-sensitive data, shame on them. But this is not the case here.
Google's primary mission is surgically precise ad targeting (pun intended). Facebook has the same mission; that's their primary revenue stream; the difference is where their user data comes from. Any effort they make somehow has the potential to improve their primary mission. This is just another brick in their wall.
This is how (a bit educated part of) public sees G/F. Maybe the image is not 100% precise, but they do very little to correct it and be the properly honest good guys.
...but how do we do it without also transforming into a dystopia of "radical transparency"?
With the Apple Watch, Apple partnered with Stanford on a heart study to identify a-fib using smartwatch-sourced heart rate data. They attracted 400,000 participants.
I would guess that exactly zero of these participants feel cheated by their participation in this study (so far). Why? It's the same general concept as this DeepMind stuff. I'd argue it's three things: clear, precise, opt-in consent for the data shared; a clear and constrained explanation of its usage; and trust in the receiving parties.
Apple and Stanford have these three things in spades. Google is incapable of all of them, and you can't Design or Buy your way through corporate culture and customer trust. They gather and correlate so much data, their consent process is far from precise or opt-in. Once they have the data, they have a history of using it for Whatever they want. Which all ties back to trust; Google has permanently ruined any trust customers would have toward them for things that Actually Matter.
Point being, other companies can pick up this mantle. It won't be Google. And if you're a health-focused company, joining Google is a literal death sentence for your product. You'll end up prototyping something amazing, then be completely incapable of deploying it for any public good because no one will give you data.
A company can sell it to advertisers. A government can use it to run a genocide. An individual can use it for blackmail. Who's left?
All of this applies to companies. Morality comes from shared culture and values. Laws like GDPR help. Reputation is a huge one; if companies want any sort of meaningful enterprise contracts, especially with PII/PHI, data privacy and security is paramount.
Eventually, no matter how virtuous a company claims to be, they'll sell the company to FAANG who I try to avoid as much as possible. When the true end goal of most startups is an exit payday to Google I can't trust them with my data.
It's frustrating that avoiding Google isn't even as simple as "Don't use Google". It's become "Don't use anyone who might be an acquisition target for Google".
And their appealing product will probably get shut down or rendered unrecognizable, either by FAANG or by bankruptcy. I've pretty much sworn off all startups that sell services, products dependent on services, or anything that requires "the cloud."
At some point, my data's going to be sold off and monetized, and I won't even be able to enjoy the product I was supposed to get in return.
The biggest problem with these big faceless companies is the lack of control and that you can't trust them to do what they say they will.
For example the dark patterns that Google puts around location data controls. You turn it off in one place but unfortunately that's the wrong place so they're still collecting that data.
They say that Nest data will never be combined with other Google data ...until they quietly change their terms and do just that.
Privacy law is like tax law to these firms, something to look for loopholes in.
They published a pseudonymous list of (User, movie, rating) triplets and two lists of (User, movie) tuples. The idea was that people could train a model to predict the ratings that correspond to the two lists of (User, movie) tuples.
You could hand in a guess every day, and would get back a score on the first list of (User, movie) ratings. In the end, whoever got the best score on the second list got $1,000,000.
It was a really interesting challenge, and some good research came of it, but it turned out that it was pretty easy to de-anonymize users by correlating to e.g. IMDB watch lists.
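A toy version of that attack, just to show its shape (all data below is invented, and the actual Narayanan/Shmatikov work is statistically far more careful):

```python
# Match a pseudonymous rating history against public profiles by counting
# (movie, rating) overlaps. A handful of rare ratings is often enough.
anon_user = {("Heat", 5), ("Brazil", 4), ("Alien", 3), ("Ran", 5)}

public_profiles = {
    "imdb_user_A": {("Heat", 5), ("Brazil", 4), ("Ran", 5)},
    "imdb_user_B": {("Alien", 3), ("Jaws", 4)},
}

# Pick the public profile sharing the most (movie, rating) pairs.
best_match = max(public_profiles, key=lambda u: len(anon_user & public_profiles[u]))
print(best_match)  # -> "imdb_user_A"
```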
It took just two years to go from rejecting the FBI's request for one user's keys, to handing over the encryption keys to China for millions of users.
Apple was asked by China to move the user data and its keystore to local Guizhou-Cloud data centers. Apple updated their TOS and blocked service to anyone who didn't agree to moving their user data.
These datacenters were nationalized just several months ago:
> Fast forward to today: China Telecom, a government owned telco, is taking over the iCloud data from Guizhou-Cloud Big Data. This essentially means that a state-owned firm now has access to all the iCloud data China-based users store, such as photos, notes, emails, and text messages.
I suspect someone would have to show that the model trained on their data revealed something about them in a practically harmful way.
I guess it still needs to be litigated, but the question on my mind is: does that right of refusal only apply to the model, or also to the data that trained it? If it applies to the data, the regulation is pretty useless, since anyone could avoid the deletion requirements by training models on it; if it doesn't, I think the use in the model takes care of itself. At some point they'll need to retrain, and then your data won't be there.
I feel though that the point of the GDPR was to protect our personal data held by companies, not to prevent companies using our personal data to make money.
So if a company uses your personal data to train a model (let's assume you willingly gave your informed consent for the time being), and then they delete your data after they have trained their model, does that model contain your personally identifiable information? I'd argue that it does not - the model is just some weights, right? So 0.6, 34.291, 0.0016 - is that you, mum?
.... but having just said that, I do wonder what happens if you run the model in reverse, like the deepdream stuff did (1). Could it re-generate PII (or rather generate "nearly-PII") purely from those weights?
1 - https://en.wikipedia.org/wiki/DeepDream
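For the curious, a minimal sketch of that "run it in reverse" idea (PyTorch assumed; the tiny untrained net is a hypothetical stand-in, and real model-inversion attacks are much more involved):

```python
import torch
import torch.nn as nn

# Gradient-ascend an input so the model's output for a chosen class is
# maximised, DeepDream-style. Against a model trained on personal data,
# the optimised input can come to resemble that class's training examples.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

x = torch.zeros(1, 16, requires_grad=True)  # start from a blank input
optimizer = torch.optim.Adam([x], lr=0.1)
target_class = 1

for _ in range(200):
    optimizer.zero_grad()
    loss = -model(x)[0, target_class]  # push the target class logit up
    loss.backward()
    optimizer.step()

print(x.detach())  # the input the model most "wants" to see for that class
```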
This means that, after Brexit, the GDPR implementation laws will still be law in the UK. Depending on the outcome of the Brexit negotiations, the UK might or might not be in a position to repeal those laws at their own discretion.
Directives are the ones that only direct the states to enact laws implementing them.
I’m a Remainer, but let’s concentrate on being alarmed about the real impacts.
The battle now is, what form does the divorce take: messy and sharp (no deal, where the UK has no access to anything without lots of barriers) or really painful, where we lose access to lots of things but retain access to a few key things.
There is a very remote chance, very remote, that a referendum will take place, but unless Article 50 is rescinded it will be meaningless, as we will leave the EU automatically at the end of March.
As of today it's looking more and more likely that the separation will be in name more than in function though.
It's worse than that. It's what nobody wants: tied to some parts of the EU, with no say whatsoever. In the case of the finance industry, it'll be 60 days' notice to comply or access is withdrawn. Brilliant for the EU, as it means that it can start creaming off the finance industry and the vast sums of money it generates in tax.
There are no limits to greed and that's why there are serious constraints on all sorts of things that can damage the commons and why the only legitimate force is the common good.
And a lesson for the tech community who have seen first hand the rapid transformation of seemingly well meaning ethical actors into self obsessed exploitative bad actors completely divorced from ethics.
Nah screw that empire. 29 Million killed in India and 1 million killed in Ireland.
> DeepMind Health went on to work with Moorfields Eye Hospital, with machine-learning algorithms scouring images of eyes for signs of conditions such as macular degeneration
Sorry, but "scouring images of eyes for signs of conditions" on a scale of single hospital is a task for two CS graduates, easily accomplished with freely available machine learning tools. The hospital in question could have done that themselves at minuscule cost. Are UK hospitals legally prohibited from hiring non-medical staff or something? Instead they are partnering (conspiring) with international companies to... do what again? Write Android apps and feed images to neural networks? In exchange for their entire medical data??
Is UK becoming another India or something?
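For what it's worth, the scaffolding really is about this small. An illustrative-only sketch (paths, labels, and any clinical fitness are hypothetical; a deployable system would need validation cohorts and regulatory sign-off):

```python
import torch
import torch.nn as nn
from torchvision import models

# Fine-tune a pretrained ResNet to flag a condition in retinal scans.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # e.g. healthy vs. macular degeneration

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    # images: (N, 3, 224, 224) tensors of fundus photos; labels: (N,) ints
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```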
They are not bound by HIPAA, as it's the UK.
In China the tech/data hegemony is part of the central government, while in the West it's separate; FAANG et al. are expected to keep a distance from the government and vice versa.
I was trying to imagine a scenario for a science-fiction story set around 2040, and my brain conjured an image of the people of China chipped and managed by computer... It was chilling. As for the West, I imagine we're going to have to nationalize the data and infrastructure of the tech companies OR acknowledge them as the new technocratic form of government. Either that or bifurcate into Morlocks and Eloi, in which case it doesn't matter what form the control system takes.
I guess what I'm asking is, which system do you think will be stable in the long term ("long" meaning 20 to 70 years, the time it takes for the weather to get really hard to ignore), and why? Or will something else happen?
 "How ZTE helps Venezuela create China-style social control"
"A new Venezuelan ID, created with China's ZTE, tracks citizen behavior" (reuters.com)
Is it just playing the odds: "oh, this and this are probably this or this or this"?
What do you imagine a doctor does? They use their education and experience to make an educated guess as to a course of testing/treatment.
ML models are developed under the supervision of doctors (often leaders in their field) and engineers and are validated against large/statistically significant cohorts.
Source: Spent five years working for a company which released an IF/ML prognostic model for late stage prostate cancer.
I was just curious what the end game was with it.
I am a bit skeptical of just playing the averages but no more / less than individual doctors.
That would be the minimum standard, but you gain efficiency and lower costs (well, theoretically... companies throw crap in just to hit a higher reimbursement tier.) The models often do better than doctors (where applicable) because often times doctors won't agree with each other. Like in any profession, you have varying levels of competence. My work has always been in the realm of pathology, so I don't know much about other areas. In pathology you will often see five different doctors give five different interpretations when looking at the same sample.
Once upon a time childhood cancer was a death sentence. Cancer in young kids moves very fast. Doctors in the 50s/60s did their best but studied the problem as individuals, writing and presenting papers based on patients at their hospitals. But nobody had enough data to discover the incremental improvements. Then docs started getting together and adding patients from multiple hospitals into larger and larger studies. From this larger pool of patients came trends and treatment advice that, today, means childhood cancer is largely survivable. (It is still horrible, but today many childhood cancers are very treatable.) That movement required patient data leave the hands of their individual doctors. Today EVERY kid with cancer is part of multiple studies and it is normal for their information to be shared far and wide. AI may be the next great thing, but it needs data. It may be necessary for patients to again give up a little privacy to enable progress.
I would never in a million years let a company like Google or Facebook have that kind of info on me.
The master contract for your insurance company almost certainly allows use of the information that you provide for a variety of business purposes. Insurance companies pool risk data, and it would be unusual for any negative event not to be shared if captured via this mechanism. Another key thing is to look at the precise wording around "We do not sell your data". That is weasel-wording that usually means "We will rent your data" or "We will provide your data at no cost to our business partners, for our business purposes".
Businesses are for profit. They'll find ways to monetise the data. Wording like "the data" can be bypassed by modifying it, e.g. extracting location heat maps and selling those instead of the raw data. Or they may create a service that uses the data and sell that instead.
Not that I know different, I'm just surprised to see so definite a statement.
I do have a problem with insurance organizations and pharmacies abusing data sharing agreements intended for subrogation and similar procedures to manage pharmaceutical sales quotas and conduct outbound marketing.
Case in point: my wife was admitted to the hospital due to complications from what was an early miscarriage. The health insurer sells data that allows an advertiser to surmise that there was a hospital admission to the OB department. The PBM provides anyone paying with information regarding prescriptions before my insurer even gets the claim.
Outcome: An advertiser (infant formula company) determines that my wife is likely pregnant and likely to deliver on Month/Day/Year. Guess what arrives on that day? A Fedex care package of formula.
That was a very hurtful event for us, and similar violations happen thousands of times every day.
In that case it had nothing to do with mining medical records for advertising purposes. The daughter’s browsing and shopping habits sent a strong enough signal to trigger the ad targeting.
I don’t know anything about your case, and am very sorry to hear about your family’s loss. I don’t know if you can draw a line to the insurance company selling you out. But now I’m very curious to learn more about what data insurance companies are allowed to sell, and to whom.
I know the insurance companies, hospital and PBM sold pieces of the data because the formula company immediately upon request disclosed the list that they obtained my name from, and identified who had the relevant information by process of elimination. I don't know specifically all of the ways this is done.
Basically, claims data is sold, but not diagnoses. There is other context (type of admission, source of claim) that can identify the reason with confidence (i.e. ER admission, hospital admission, claim from an OB/GYN). The prescription can strengthen the assumed condition, and your pharmacy provides that data in near real-time to pharmaceutical companies, brokers and others. That script is tied to the DEA number of the doctor and can be cross-referenced to the admission.
The formula company takes that data and mashes against people who have used their coupons in the past.
It's infuriating that companies think they can insert themselves in people's lives like that. Why not keep it strictly at "they sell something" and "we will go and look for that something if we need it"? Why try to squeeze themselves into people's faces like that?
Of course. But companies like Google, Facebook, Amazon, Apple (yes, I'm including Apple), and others should be a mile from that data. Google and Facebook particularly are ill equipped, from a moral and focus perspective, to have anything to do with this industry.
It's important to note that these studies are subject to pretty rigorous review by IRBs (https://en.wikipedia.org/wiki/Institutional_review_board).