No, it won't. Protein structure determination isn't exactly a rate-limiting step in drug development, especially since the computational chemistry models of binding aren't up to snuff when it comes to being substitutable for time-consuming animal models.
This press release and upselling of the protein structure determination advancements is a lot like the human genome project; while it was incredibly useful in its own right, everyone who predicted that it would unleash improvements in human health has been proved woefully wrong.
I know HN is inclined to be cynical about big claims, but it is not hard to find computational biologists (unaffiliated with DeepMind) saying that this is indeed a revolutionary advance for biology:
* Mohammed AlQuraishi, a computational biologist at Columbia University in New York City and a CASP participant [...]’s strong hunch is that AlphaFold will be transformational.
* “It’s a game changer,” says Andrei Lupas, an evolutionary biologist at the Max Planck Institute for Developmental Biology in Tübingen, Germany, who assessed the performance of different teams in CASP. [...] “This will change medicine. It will change research. It will change bioengineering. It will change everything,” Lupas adds.
 "An AlphaFold prediction helped to determine the structure of a bacterial protein that Lupas’s lab has been trying to crack for years. Lupas’s team had previously collected raw X-ray diffraction data, but transforming these Rorschach-like patterns into a structure requires some information about the shape of the protein. Tricks for getting this information, as well as other prediction tools, had failed. “The model from group 427 gave us our structure in half an hour, after we had spent a decade trying everything,” Lupas says." https://www.nature.com/articles/d41586-020-03348-4
 (unconfirmed) https://twitter.com/halvorz/status/1417535090385580033
Just to clarify every researcher uses genome sequences every day as part of their research, so the human genome project did change how we do research as will this breakthrough. but It doesn't solve the real bottleneck problems that have prevented us from making real breakthroughs.
In an ideal world, we should be learning from more authoritative sources, so we can keep a finger on the pulse and filter out BS when needed. Participating with fellow people to drive consensus is as important as growing ones own knowledge/understanding.
AI being used for something doesn't mean it will be used to "transform" something. Lots of things can block the adoption of new technology due to the bias of practitioners or entrenched policy of "that's the way things are done". Healthcare doesn't switch from using fax machines for lots of reasons.
If BBC makes a giant leap that AI will "transform" biology, the burden is on them to hint at why biologists would find it reliable and accessible enough to warrant inserting it into their toolkits. How easy is the adoption of ML models in the hard sciences today?
Lastly, I don't read many ML papers but as an outsider, the "Code Availability" section seems odd. It may be cut off from the article preview I saw, but is it possible to get access to a singular code base that the research used to generate results? The section's first few paragraphs are "here are the libraries we used"
In general I think that AI is a genie that will interpret your wishes in the worst way possible to achieve your goal. If you're specific enough then great. But it's hard to know whether you were or not until the genie is out of the bottle.
If you take AI to mean the technology we currently have, then there are plenty of times when optimizers will cheat at a game to achieve the best metrics. Just recently I read where a Tetris AI, tasked with making the game last as long as possible, just paused the game. That's the kind of thing I'm talking about: humans underspecifying goals, and a machine learning algorithm exploiting the specification to achieve the goal.
And stories on BBC's website are often shallow and mere echoes of reportage originating elsewhere -- too often arising unfiltered from sources promoting their product/research as "game-changing revolutionary advances", as Google is doing here.
The main UK channels here seems to be somewhat decent but should be taken with a pinch of typical BBC bias, perhaps the news website is where they let their work-experience folk loose.
I think it happened right when there were key changes, either the online editor, or a public announcement they would earn more of their own revenue, or a combination of the two. Circa 2013 if I remember correctly.
Sadly of course, if a media property has to play the same eyeballs-ads-game every other online junkheap does, then inevitably the quality dives vey quickly in favour of cheap thrills and assorted clickbait.
Reporters and anchors were completely unwilling to discuss the issue as a whole - each fire was treated individually, the anchor repeating every few minutes "there seems to be a new fire, but, it is not related in any way to the other 200 fires set this evening....." a few minutes pass "we have shocking new information that a fire has started, as of right now, we have no indication that it is related to the rest of the fires"....few minutes later "thugs are starting fires elsewhere, it seems totally unrelated to the other fires"
Completely unwilling to discuss poverty or any of the reasons people riot, instead, treating each incident as individual and not remotely related to the rest. It's like they want to pretend that everything is totally great in the UK like always and any problem you think you see is just temporary and unrelated to any other problem you see.
BBC is allergic to anything that is systematically wrong. No greater vision for what is happening in the world, just demeaning words for the poor and controlling-the-narrative that the UK is awesome and can do nothing wrong, systematically at least.
Nothing remotely new since the very birth of the BBC. It's always had an extremely paternalistic streak
>John Reith [the creator of the BBC] and the BBC, with support from the Crown, determined the universal needs of the people of Britain and broadcast content according to these perceived standards. Reith effectively censored anything that he felt would be harmful, directly or indirectly. While recounting his time with the BBC in 1935, Raymond Postgate claims that BBC broadcasters were made to submit a draft of their potential broadcast for approval.
And it's also never been too bothered about telling the truth
>While the BBC tends to characterise its coverage of the  general strike by emphasising the positive impression created by its balanced coverage of the views of government and strikers, Jean Seaton, Professor of Media History and the Official BBC Historian, has characterised the episode as the invention of "modern propaganda in its British form". Reith argued that trust gained by 'authentic impartial news' could then be used. Impartial news was not necessarily an end in itself.
Or of remaining politically neutral
>The resulting coverage of both striker and government viewpoints impressed millions of listeners who were unaware that the PM had broadcast to the nation from Reith's home, using one of Reith's sound bites inserted at the last moment, or that the BBC had banned broadcasts from the Labour Party and delayed a peace appeal by the Archbishop of Canterbury. Supporters of the strike nicknamed the BBC the BFC for British Falsehood Company.
It’s in the nature of the BBC that you’re not going to get opinionated independent research on issues which are political, they report directly relevant factual information, and then they report when someone else does follow up research, for instance this film produced by BBC Newsnight and the Guardian, based on research by the Joseph Rowntree Foundation: