Hacker News new | past | comments | ask | show | jobs | submit login
AI breakthrough could spark medical revolution (bbc.com)
42 points by alexrustic 5 days ago | hide | past | favorite | 25 comments





To paraphrase a drug development chemist (see https://blogs.sciencemag.org/pipeline/archives/2019/09/25/wh...)...

No, it won't. Protein structure determination isn't exactly a rate-limiting step in drug development, especially since the computational chemistry models of binding aren't up to snuff when it comes to being substitutable for time-consuming animal models.

This press release and upselling of the protein structure determination advancements is a lot like the human genome project; while it was incredibly useful in its own right, everyone who predicted that it would unleash improvements in human health has been proved woefully wrong.


I have already seen reports from biologists about AlphaFold solving protein structures that have resisted laboratory methods for decades [1,2]. It's hard for me to see why this is not a transformative development for (one part of) basic biology. Note that the BBC article doesn't actually say that it will transform drug discovery!

I know HN is inclined to be cynical about big claims, but it is not hard to find computational biologists (unaffiliated with DeepMind) saying that this is indeed a revolutionary advance for biology:

* Mohammed AlQuraishi, a computational biologist at Columbia University in New York City and a CASP participant [...]’s strong hunch is that AlphaFold will be transformational.

* “It’s a game changer,” says Andrei Lupas, an evolutionary biologist at the Max Planck Institute for Developmental Biology in Tübingen, Germany, who assessed the performance of different teams in CASP. [...] “This will change medicine. It will change research. It will change bioengineering. It will change everything,” Lupas adds.

[1] "An AlphaFold prediction helped to determine the structure of a bacterial protein that Lupas’s lab has been trying to crack for years. Lupas’s team had previously collected raw X-ray diffraction data, but transforming these Rorschach-like patterns into a structure requires some information about the shape of the protein. Tricks for getting this information, as well as other prediction tools, had failed. “The model from group 427 gave us our structure in half an hour, after we had spent a decade trying everything,” Lupas says." https://www.nature.com/articles/d41586-020-03348-4

[2] (unconfirmed) https://twitter.com/halvorz/status/1417535090385580033


Scientists themselves touted the HG project the same way. It's great and exciting but no one's curing cancer tomorrow.

Just to clarify every researcher uses genome sequences every day as part of their research, so the human genome project did change how we do research as will this breakthrough. but It doesn't solve the real bottleneck problems that have prevented us from making real breakthroughs.


The ability to rapidly sequence genes seems like it has been pretty revolutionary for medicine. Many cancers are treated based on their sequencing, and diseases are tracked through their genetic sequencing. For example, it's hard to imagine their being a lot of reporting on the variants of COVID before genetic sequencing was possible.

If we want to cure more cancers, we will eventually have to target drugs at the level of individual patient. Traditional animal models aren't going to work there. If medical ethicists in the US insist on requiring animal models into the indefinite future, cancer patients will have no options besides travelling to a foreign country to get their cancer treated.

No it won't. This is uninformed hype. HN should ban articles from mainstream media on hard science topics.

Easiest way to lose touch with the general populace! :-)

In an ideal world, we should be learning from more authoritative sources, so we can keep a finger on the pulse and filter out BS when needed. Participating with fellow people to drive consensus is as important as growing ones own knowledge/understanding.


I tried looking for the conclusion, but I think the journal article preview does not show it.

AI being used for something doesn't mean it will be used to "transform" something. Lots of things can block the adoption of new technology due to the bias of practitioners or entrenched policy of "that's the way things are done". Healthcare doesn't switch from using fax machines for lots of reasons.

If BBC makes a giant leap that AI will "transform" biology, the burden is on them to hint at why biologists would find it reliable and accessible enough to warrant inserting it into their toolkits. How easy is the adoption of ML models in the hard sciences today?

Lastly, I don't read many ML papers but as an outsider, the "Code Availability" section seems odd. It may be cut off from the article preview I saw, but is it possible to get access to a singular code base that the research used to generate results? The section's first few paragraphs are "here are the libraries we used"


Poster changed the headline from "could" to "will". EDIT: It's fixed now.

I think it was BBC that changed the headline, not the poster (Twitter grabbed "will" here https://twitter.com/Bill_Gardner/status/1418265769775968258).

It will almost certainly lead to more drugs...whether those drugs help actual patients live longer or better is the important question.

It would make for a great dystopian novel where someone gives an AI the goal to "help actual patients live longer" and it does that in typical AI fashion, by putting people in eternal stasis or manually keeping their blood flowing so that they are technically "alive" but living in a nightmare. Metric achieved!

In general I think that AI is a genie that will interpret your wishes in the worst way possible to achieve your goal. If you're specific enough then great. But it's hard to know whether you were or not until the genie is out of the bottle.


That's typical Terminator FUD. That's not how AI works. That's not how any of this works.

I don't think you understand what I meant.

If you take AI to mean the technology we currently have, then there are plenty of times when optimizers will cheat at a game to achieve the best metrics. Just recently I read where a Tetris AI, tasked with making the game last as long as possible, just paused the game. That's the kind of thing I'm talking about: humans underspecifying goals, and a machine learning algorithm exploiting the specification to achieve the goal.


To be clear, it paused the game when it was about to lose, it still played normal Tetris

I don't think that diminishes the point though.

Is it my imagination or has the quality of BBC's reporting diminished in recent years? Their BBC America news show appears to be operating on a shoestring these days, with commentary emanating from only a handful of reporters who are more often located at a second studio than a remote locale.

And stories on BBC's website are often shallow and mere echoes of reportage originating elsewhere -- too often arising unfiltered from sources promoting their product/research as "game-changing revolutionary advances", as Google is doing here.


Unfortunately it is not your imagination, their reporting is becoming mere blogspam.

The main UK channels here seems to be somewhat decent but should be taken with a pinch of typical BBC bias, perhaps the news website is where they let their work-experience folk loose.


Sadly, BBCNews.com has been blogspam for the better part of a decade already. I noticed it years ago when increasing numbers of their headlines were simply quotes from public figures (often taken out of context) with some prepending to create more of an alarmist feel.

I think it happened right when there were key changes, either the online editor, or a public announcement they would earn more of their own revenue, or a combination of the two. Circa 2013 if I remember correctly.

Sadly of course, if a media property has to play the same eyeballs-ads-game every other online junkheap does, then inevitably the quality dives vey quickly in favour of cheap thrills and assorted clickbait.


Reporting everywhere is diminishing because people want short spammable memes that validate their opinions and beliefs over news that can enlighten them. The internet has decimated longform journalism and the market no longer exists for it.

From my perspective, longform information has moved to podcasts.

This

I watched the live BBC coverage of the London 2011 riots. Their live coverage was so completely out of touch and basically wrong it was disillusioning.

Reporters and anchors were completely unwilling to discuss the issue as a whole - each fire was treated individually, the anchor repeating every few minutes "there seems to be a new fire, but, it is not related in any way to the other 200 fires set this evening....." a few minutes pass "we have shocking new information that a fire has started, as of right now, we have no indication that it is related to the rest of the fires"....few minutes later "thugs are starting fires elsewhere, it seems totally unrelated to the other fires"

Completely unwilling to discuss poverty or any of the reasons people riot, instead, treating each incident as individual and not remotely related to the rest. It's like they want to pretend that everything is totally great in the UK like always and any problem you think you see is just temporary and unrelated to any other problem you see.

BBC is allergic to anything that is systematically wrong. No greater vision for what is happening in the world, just demeaning words for the poor and controlling-the-narrative that the UK is awesome and can do nothing wrong, systematically at least.


I wrote this elsewhere but I'll copy it here:

Nothing remotely new since the very birth of the BBC. It's always had an extremely paternalistic streak

>John Reith [the creator of the BBC] and the BBC, with support from the Crown, determined the universal needs of the people of Britain and broadcast content according to these perceived standards. Reith effectively censored anything that he felt would be harmful, directly or indirectly. While recounting his time with the BBC in 1935, Raymond Postgate claims that BBC broadcasters were made to submit a draft of their potential broadcast for approval.

And it's also never been too bothered about telling the truth

>While the BBC tends to characterise its coverage of the [1926] general strike by emphasising the positive impression created by its balanced coverage of the views of government and strikers, Jean Seaton, Professor of Media History and the Official BBC Historian, has characterised the episode as the invention of "modern propaganda in its British form". Reith argued that trust gained by 'authentic impartial news' could then be used. Impartial news was not necessarily an end in itself.

Or of remaining politically neutral

>The resulting coverage of both striker and government viewpoints impressed millions of listeners who were unaware that the PM had broadcast to the nation from Reith's home, using one of Reith's sound bites inserted at the last moment, or that the BBC had banned broadcasts from the Labour Party and delayed a peace appeal by the Archbishop of Canterbury. Supporters of the strike nicknamed the BBC the BFC for British Falsehood Company.


I actually thought the Question Time after the riots was the only time I’d ever seen that programme produce a useful debate.

It’s in the nature of the BBC that you’re not going to get opinionated independent research on issues which are political, they report directly relevant factual information, and then they report when someone else does follow up research, for instance this film produced by BBC Newsnight and the Guardian, based on research by the Joseph Rowntree Foundation:

https://www.theguardian.com/uk/video/2011/dec/05/reading-the...




Applications are open for YC Winter 2022

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: