
DeepMind readies first commercial product - s1512783
https://www.ft.com/content/0e099914-514a-11e9-9c76-bf4a0ce37d49
======
neonate
[https://outline.com/EpGqWm](https://outline.com/EpGqWm)

~~~
dclusin
[https://github.com/iamadamdev/bypass-paywalls-chrome](https://github.com/iamadamdev/bypass-paywalls-chrome)

No need to rely on third party service.

~~~
r3bl
[https://github.com/iamadamdev/bypass-paywalls-firefox](https://github.com/iamadamdev/bypass-paywalls-firefox)

No need to rely on Chrome.

~~~
sverhagen
Do you know why it's not added to the official directory of Firefox Add-ons?

~~~
sah2ed
Dude, don’t leave everyone hanging, spill the beans!

~~~
sverhagen
Me? It wasn't rhetorical, I really don't know. But I installed it from GitHub.
When I first tried that on my phone, I mistook the popup in question for my
device saying it wouldn't do it for security reasons, which made me think I
should get it from the Add-ons directory instead (but couldn't find it there).

------
dr1337
The cost blowout in healthcare isn't due to diagnosis or care but to
administrative costs. All these diagnostic AIs won't significantly reduce the
cost of healthcare or improve its quality. What we really need to move the
needle is automated administration and billing, and that's bottlenecked more
by humans than by technology.

~~~
bufferoverflow
Another source of high healthcare costs is the lack of competition on price.
You have zero idea how much each procedure costs. Imagine going to a
supermarket with no price tags, where you must buy the items once you pick
them off the shelf. It's madness.

~~~
cure
It's worse.

If you have insurance, the price tag will be X, and they will pay for some
part (most?) of it.

If you don't have insurance, the price tag can easily be 100X and it's your
responsibility.

If you have insurance, but you happened to pick up that can of pepsi from the
left aisle (out of network) instead of the right aisle (in network), they
won't cover it, and you're stuck with the 100X price. You won't know that
before you buy the can of pepsi, of course.

If you try to ask your insurance if they cover the specific can of pepsi from
the left aisle, they may or may not tell you. They may or may not give you the
right answer, and if they tell you it's covered but then refuse to cover it,
it's your problem (and again, the price will be 100X).

~~~
zaroth
It should be illegal to charge two different people two different prices for
the same procedure at the same facility.

The whole model of negotiated rates, rebates, etc. needs to go.

I also think the concept of networks is bizarre on the face of it. A certified
medical provider should be covered to perform procedures in their area of
expertise. Period.

Personally I like the idea of anyone who wants to be able to buy into Medicare
A & B. And if you don’t have Medicare then you can always pay the Medicare
rate of the procedure at 100% (versus having Medicare where your copay is
20%).

If insurance companies can’t compete with that then great.

~~~
dmix
So the solution is more layers of administration over the prices of the pseudo
marketplace?

~~~
zaroth
How is that a fair representation of what I proposed?

I think part of the problem is absolutely opaque, discriminatory, and
_predatory_ pricing.

I went to get a basic blood count last month. I gave the lab my insurance
card, but they must have copied a number wrong, because when I got the bill
they had me down as self-insured, and the bill said:

      Lab Services     :  $1,541.00
      Patient Adjust   : -$  385.25
      Total Due        :  $1,155.75

I called back and gave them my insurance card, and they said to ignore the
bill; a new one would come in the mail.

Last week I got the new bill:

      Lab Services     :  $   17.12
      Insurance Pay    :  $   12.12
      Total Due        :  $    5.00

Yes, I absolutely think it should be illegal to try to bilk a cash-paying
customer out of more than $1,000 for a $17 blood count test.

------
melling
AI is being developed to make MRI scans 10x faster.

[https://www.forbes.com/sites/samshead/2018/08/20/facebook-ai...](https://www.forbes.com/sites/samshead/2018/08/20/facebook-aims-to-make-mri-scans-10x-faster-with-nyu/#5a5f7fd87a04)

If DeepMind is trained on millions of MRIs, we might have better preventative
medicine.

~~~
dontreact
I have talked at length with several people in the FastMRI project, and in my
opinion it is actually very dangerous. There is no feasible way to validate
that such a model will not hallucinate normal tissue in the presence of a rare
abnormality. The argument often used is that advanced reconstruction
techniques such as compressed sensing have not required validation against
very rare abnormalities; however, when deep neural networks get involved, you
have a nearly universal function approximator whose behavior is nowhere near
as bounded.

The FDA will probably set the low bar of validating that the reconstruction
algorithm fares well in the face of a few abnormalities and call it a day,
instead of the proper (admittedly infeasible) validation of testing against
all abnormalities that the reconstruction algorithm will encounter. Facebook
will probably happily jump over this low bar, and patients will get hurt.

~~~
LeanderK
I never understood these arguments. Isn't it like claiming self-driving cars
via neural networks would never be possible because you can't test that the
neural network would make the correct decision in every situation?

I view the whole issue stochastically, with the immediate aim being to make
(significantly) fewer errors than the current approach, which is having a
human decide. I don't claim that I can design an experiment which could serve
as an indication of whether we are improving upon human judgment, but I think
this should be the goal.

Reflecting upon my view, I think it comes from the experience of training ML
algorithms. You are always minimizing errors, but your goal is almost never to
make zero errors, because often your data is noisy and you are probably
overfitting. I know medical environments are more sensitive, but I can't
really wrap my head around how we could design a learning algorithm that makes
no errors and works on all abnormalities. I think it will always misclassify
something.

Rephrasing my argument: I think approval should be given if a significant
expected improvement over the distribution of real-life abnormalities can be
detected, not over the uniform distribution over all abnormalities.

EDIT: detecting out-of-distribution samples is hard; I don't think it is a
solution, and it leads to a false sense of security.

~~~
PavlikPaja
It isn't. It obviously is possible to drive a car. The problem here is that
you're trying to reconstruct an image from less data than is fundamentally
possible. Compressed sensing is already as low as you can get; unless you
prove it is possible to reconstruct the image accurately from even less data,
it's probably pointless to try to use neural networks to decode it, and it's
actually a bad idea anyway.

The problem with neural networks is that they can reconstruct something that
looks "normal", not necessarily something that is accurate. The more abnormal
the scan, the more likely it will get reconstructed as something that looks
perfectly fine even when it's not.
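A toy sketch of the underdetermination point (nothing to do with real MRI
physics; just two measurements of four unknowns): many signals fit the same
undersampled measurements, and a smoothness prior will happily pick a
"normal-looking" one that erases the spike.

```python
# Toy illustration of undersampled reconstruction: a 4-pixel signal is
# "measured" with only 2 sums, then reconstructed with a smoothness prior.
# The bright spike (the "abnormality") is smeared into something that looks
# perfectly normal, yet is fully consistent with the measurements.

true_signal = [0.0, 0.0, 8.0, 0.0]  # one bright spike: the abnormality

def measure(signal):
    # Two measurements: sum of the left half and sum of the right half.
    return [signal[0] + signal[1], signal[2] + signal[3]]

y = measure(true_signal)  # [0.0, 8.0]

# "Smoothest" reconstruction consistent with y: spread each measured sum
# evenly across its two pixels.
reconstruction = [y[0] / 2, y[0] / 2, y[1] / 2, y[1] / 2]

print("true:          ", true_signal)     # [0.0, 0.0, 8.0, 0.0]
print("reconstruction:", reconstruction)  # [0.0, 0.0, 4.0, 4.0]
print("consistent with measurements:", measure(reconstruction) == y)
```

Both signals explain the data equally well; only the prior decides between
them, and the prior is exactly what favors "normal" over "abnormal".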

~~~
LeanderK
I really don't know anything about MRI and its usage; I was mainly
criticising that his arguments sounded like a lot of the arguments I've heard
against other uses of ML algorithms.

What I meant: there's probably a medical reason why you want such a product,
and if a more readily available MRI saves (really significantly) more lives
than the chance that it might miss some abnormalities which could lead to
death, then I think we should allow it. That's what I meant by a stochastic
view. If, for example, we only have a few scans per hospital available because
of the chance that we might misclassify something, and lots of people get
worse or delayed treatment because they are not high enough on the priority
list to get access to a super-resolution MRI with a fidelity they don't really
need (again, I don't know anything; just to illustrate my point), then I think
something is wrong.

His argument just sounded dismissive without giving a reason that seemed
valid, at least to my uninformed point of view.

------
sgt101
Presumably there will have to be a proper medical trial with a declared
up-front hypothesis and double-blind testing?

------
rwmj
I couldn't read the article, but I wonder if the NHS will get this for free,
since we "gave" them all our data in order to train the model.

~~~
dmurray
> A DeepMind spokesperson said that if the research results in a product that
> passes clinical trials and regulatory approvals, doctors at Moorfields will
> be able to use the product for free for an initial period of five years.

------
make3
I thought DeepMind's actual first commercial product was Google Cloud's
premium voice generation service (built on WaveNet)?

~~~
sergers
Maybe it's semantics?

Google Cloud's voice generation service uses models/algorithms developed by
DeepMind, but it's not directly a DeepMind product.

------
mrhappyunhappy
What’s next, an AI to make a judicial verdict? Seriously though, this is very
exciting. I can imagine a future in which medical diagnosis is damn near 100%
accurate - all you need to do is lie down and get scanned and smelled by an
all-in-one machine, and your diagnosis is displayed with a recommended
treatment. If treatable with drugs or molecular repair, done on the spot.

~~~
gervase
> an AI to make a judicial verdict?

There is some discussion towards this direction:
[https://www.wired.com/story/can-ai-be-fair-judge-court-eston...](https://www.wired.com/story/can-ai-be-fair-judge-court-estonia-thinks-so/)

~~~
pm90
I hope AI can assist in the knowledge-gathering phases, and for civil cases I
think it can be a huge value add.

For criminal cases, though: the current judicial system is way too punitive,
and an AI that applied the letter of the law would likely criminalize society
even more than what has already happened so far.

One happy scenario would be if the laws became more responsive and were
changed to be less punitive, since the AI would have a high conviction rate.

But then you might have other failure scenarios: rich people buying AI
programmers and hackers to mess with the system.

It's a constant game of outsmarting the latest tech.

------
signa11
Didn't the DeepMind folks try controlling one of the data centers a while
back, which ended up reducing its overall power requirements? I was hoping
they would go that route...

------
olalonde
Unrelated, but does anyone know why the URL is not "SEO friendly"? I'm
guessing they don't bother because the content is behind a paywall anyway?

------
ionwake
How does one invest in DeepMind?

~~~
onion2k
Buy shares in GOOGL.

------
gulbrandr
For those unable to read the article, you can use this "no paywall"
bookmarklet:

        javascript:window.location.href='https://m.facebook.com/l.php?u='+encodeURIComponent(window.location.href);

------
escapecharacter
I’m offering name suggestions:

DeepEye

AEye

------
baylearn
April Fools!

