I think I disagree with most of the comments here stating it’s premature to give the Nobel to AlphaFold.
I'm in biotech academia and it has changed things already. Yes, the protein folding problem isn't "solved", but no problem in biology ever is. Comparing it to previous bio/chem Nobel winners like CRISPR, touch receptors, quantum dots, and click chemistry, I do think AlphaFold has already reached a sufficient level of impact.
It also proved that deep learning models are a valid approach to bioinformatics - for all its flaws and shortcomings, AlphaFold predicts an arbitrary protein structure in minutes on commodity hardware, whereas previous approaches were, well, this: https://en.wikipedia.org/wiki/Folding@home
A gap between biological research and biological engineering is that, for bioengineering, the size of the potential solution space and the time and resources required to narrow it down are fundamental drivers of the cost of creating products - it turns out that getting a shitty answer quickly and cheaply is worth more than getting the right answer slowly.
AlphaFold and Folding@home attempt to solve related, but essentially different, problems. As I already mentioned here, protein structure prediction is not fully equivalent to protein folding.
Yeah, this is what I mean by "a shitty answer fast" - structure prediction isn't a canonical answer, but it's a good enough approximation for good enough decision-making to make a bunch of stuff viable that wouldn't be otherwise.
I agree with you, though - they're two different answers. I've done a bunch of work in the metagenomics space, and you very quickly get outside areas where Alphafold can really help, because nothing you're dealing with is similar enough to already-characterized proteins for the algorithm to really have enough to draw on. At that point, an actual solution for protein folding that doesn't require a supercomputer would make a difference.
> this is what I mean by "a shitty answer fast" - structure prediction isn't a canonical answer
A proper protein structural model is an all-atom representation of the macromolecule at its global minimum energy conformation, and the expected end result of the folding process; both are equivalent and thus equally canonical. The “fast” part, i.e., the decrease in computational time comes mostly from the heuristics used for conformational space exploration. Structure prediction skips most of the folding pathway/energy funnel, but ends up at the same point as a completed folding simulation.
> At that point, an actual solution for protein folding that doesn't require a supercomputer would make a difference.
Or more representative sequences and enough variants by additional metagenomic surveys, for example. Of course, this might not be easily achievable.
> ends up at the same point as a completed folding simulation.
Well, that's the hope, at least.
> Or more representative sequences and enough variants by additional metagenomic surveys, for example. Of course, this might not be easily achievable.
For sure, but for ostensibly profit-generating enterprises, it's pretty much out of the picture.
I think the reason an actual computational solution for folding is interesting is that the existing set of experimentally verified protein structures covers only the proteins we could isolate and crystallize (which is also the training set for AlphaFold, so that's pretty much the area where its predictions are strongest, and even within that, it only catches certain conformations of the proteins). Even if you can get a large set of metagenomic surveys and a large sample of protein sequences, the limitations of the methods for experimentally verifying protein structure mean we're restricted to a certain section of the protein landscape. A general-purpose, computationally tractable method for simulating protein folding under various conditions could be a solution for those cases where we can't actually physically "observe" the structure directly.
Most proteins don't fold to their global energy minimum- they fold to a collection of kinetically accessible states. Many proteins fail to reach the global minimum because of intermediate barriers from states that are easily reached from the unfolded state.
Attempting to predict structures using mechanisms that simulate the physical folding process wastes immense amounts of energy and time sampling very uninteresting regions of conformational space.
You don't want to use a supercomputer to simulate folding; it can be done with a large collection of embarrassingly parallel machines much more cheaply and effectively. I proposed a number of approaches on supercomputers and was repeatedly told no because the codes didn't scale to the full supercomputer, and supercomputers are designed and built for codes that scale really well on non-embarrassingly-parallel problems. This is the reason I left academia for Google - to use their idle cycles to simulate folding (and do protein design, which also works best using embarrassingly parallel processing).
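To make "embarrassingly parallel" concrete, here is a toy sketch (hypothetical quadratic "energy", plain Python multiprocessing, nothing like a real force field): each worker runs its own independent search and you just keep the best result, with zero communication between machines - which is why scattered idle desktop cycles work as well as a tightly coupled supercomputer for this kind of sampling.

    import random
    from multiprocessing import Pool

    def toy_energy(conformation):
        # Hypothetical stand-in for a real force-field evaluation.
        return sum(x * x for x in conformation)

    def one_trajectory(seed, n_steps=2000, n_dof=50):
        # Each worker does an independent greedy random search from its own seed.
        rng = random.Random(seed)
        conf = [rng.uniform(-1, 1) for _ in range(n_dof)]
        best = toy_energy(conf)
        for _ in range(n_steps):
            i = rng.randrange(n_dof)
            old = conf[i]
            conf[i] += rng.gauss(0, 0.1)
            e = toy_energy(conf)
            if e < best:
                best = e          # keep the move
            else:
                conf[i] = old     # revert
        return best

    if __name__ == "__main__":
        with Pool() as pool:
            # Hundreds of independent seeds; the workers never talk to each other.
            results = pool.map(one_trajectory, range(200))
        print("best energy found:", min(results))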
As far as I can tell, only extremely small and simple proteins (like ribonuclease) fold to somewhere close to their global energy minimum.
Except, you know, if you're trying to understand the physical folding process...
There are lots of enhanced sampling methods out there that get at the physical folding process without running just vanilla molecular dynamics trajectories.
> It also proved that deep learning models are a valid approach to bioinformatics
A lot of bioinformatics tools using deep learning appeared around 2017-2018. But rather than being big breakthroughs like AlphaFold, most of them were just incremental improvements to various technical tasks in the middle of a pipeline.
And since a lot of those tools are only incremental improvements, they disappeared again, imho - what's the point of 2% higher accuracy when you need a GPU you don't have?
I don't see many DL-based tools regularly applied in genomics these days. Maybe Tiara for 'high level' taxonomic classification, DeepVariant in some papers for SNP calling, and that's about it? Some interesting gene prediction tools are coming up, like Tiberius. AlphaFold, of course.
Lots of papers but not much day-to-day usage from my POV.
Most Oxford Nanopore basecallers use DL these days. And if you want a high quality de novo assembly, DL based methods are often used for error correction and final polishing.
There are a lot of differences between the cutting-edge methods that produce the best results, the established tools the average researcher is comfortable using, and whatever you are allowed to use in a clinical setting.
AlphaFold doesn’t work for engineering though. Getting a shitty answer ends up being worse than useless.
It seems to really accelerate productivity of researchers investigating bio molecules or molecules very similar to existing bio molecules. But not de novo stuff.
That's just not true. In a lot of cases in engineering there are 10,000,000 possibilities, and deep learning shows you 100 potentially more promising ones to double-check, and that's worth huge amounts of money.
In a lot of cases deep learning can simulate a complex system at a precision that is more than good enough, where the problem would otherwise not be tractable (as is the case with AlphaFold), and again this is especially valuable if you can double-check the output.
Of course, in language and vision and in a lot of other fields, deep learning is straight up the only solution.
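As a sketch of that triage idea (everything here is hypothetical: a dummy scorer standing in for a trained model, and a scaled-down candidate pool), the value is purely in shrinking a huge space down to a shortlist that is cheap to double-check by slower, trustworthy means:

    # Hypothetical triage: score a large candidate pool with a cheap learned model,
    # send only the top-k on to expensive verification (wet lab, full simulation).
    import heapq

    def cheap_score(candidate):
        # Dummy heuristic standing in for a trained deep-learning scorer.
        return sum(ord(c) for c in candidate) % 1000

    def triage(candidates, k=100):
        # Keep the k highest-scoring candidates without sorting the whole pool.
        return heapq.nlargest(k, candidates, key=cheap_score)

    # Pool scaled down here; in practice it might be tens of millions of molecules.
    shortlist = triage((f"molecule_{i}" for i in range(1_000_000)), k=100)
    # 'shortlist' is what goes to the slow, reliable verification step.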
Eh, in many cases for actual customer-facing commercial work, they're sticking remarkably close to stuff that's in genbank/swissprot/etc - well characterized molecules and pathways, because working with genuinely de novo stuff is difficult and expensive. In those cases, Alphafold works fine - it always requires someone to actually look at the results and see whether they make sense or not, but also "the part of the solution space where the tools work" is often a deciding factor in what approach is chosen.
Agreed. There are too many different directions of impact to point out explicitly, so I'll give a short vignette on one of the most immediate impacts, which was the use in protein crystallography. Many aspiring crystallographers correctly reorganized their careers following AlphaFold2, and everyone else started using it for molecular replacement as a way to solve the phase problem in crystallography; the models from AF2 allowed people to resolve new crystal structures from data measured years prior to the AF2 release.
I agree that it’s not premature, for two reasons: First, it’s been 6 years since AlphaFold first won CASP in 2018. This is not far from the 8 years it took from CRISPR’s first paper in 2012 to its Nobel Prize in 2020. Second, AlphaFold is only half the prize. The other half is awarded for David Baker’s work since the 1990s on Rosetta and RoseTTAFold.
I agree. For those not in biotech, protein folding has been the holy grail for a long time, and AlphaFold represents a huge leap forward. Not unlike trying to find a way to reduce NP to P in CS. A leap forward there would be huge, even if it came short of a complete solution.
> Let me get the most important question out of the way: is AlphaFold’s advance really significant, or is it more of the same? I would characterize their advance as roughly two CASPs in one
CRISPR is widely used and there are even approved therapies based on it, you can actually buy TVs that use quantum dots, and click chemistry has lots of applications (bioconjugation etc.), but I don't think we have seen that level of impact from AlphaFold yet.
There's a lot of pharma companies and drug design startups that are actively trying to apply these methods, but I think the jury is still out for the impact it will finally have.
AlphaFold is excellent engineering, but I struggle calling this a breakthrough in science. Take T cell receptor (TCR) proteins, which are produced pseudo-randomly by somatic recombination, yielding an enormous diversity. AlphaFold's predictions for those are not useful. A breakthrough in folding would have produced rules that are universal. What was produced instead is a really good regressor in the space of proteins where some known training examples are closeby.
If I was the Nobel Committee, I would have waited a bit to see if this issue aged well. Also, in terms of giving credit, I think those who invented pairwise and multiple alignment dynamic programming algorithms deserved some recognition. AlphaFold built on top of those. They are the cornerstone of the entire field of biological sequence analysis. Interestingly, ESM was trained on raw sequences, not on multiple alignments. And while it performed worse, it generalizes better to unseen proteins like TCRs.
The value in BLAST wasn't in its (very fast) alignment implementation but in the scoring function, which produced calibrated E-values that could be used directly to decide whether matches were significant or not. As a postdoc I did an extremely careful comparison of E-values to true, known similarities, and the E-values were spot on. Apparently, NIH ran a ton of evolution simulations to calibrate those parameters.
For the curious, BLAST is very much like pairwise alignment but uses an index to speed up by avoiding attempting to align poorly scoring regions.
BLAST estimates are derived from extreme value theory and large deviations, which is a very elegant area of probability and statistics.
That's the key part, I think, being able to estimate how unique each alignment is without having to simulate the null distribution, as it was done before with FASTA.
The index also helps, but the speedup comes mostly from the other part.
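For the curious, the Karlin-Altschul statistics behind those E-values fit in a few lines. The lambda and K below are illustrative placeholders (the real values depend on the scoring matrix and gap penalties, and are estimated empirically):

    import math

    # Illustrative Karlin-Altschul parameters; real values depend on the
    # scoring matrix (e.g. BLOSUM62) and the gap penalties used.
    LAMBDA = 0.267
    K = 0.041

    def evalue(raw_score, query_len, db_len):
        # Expected number of chance alignments with score >= raw_score
        # between random sequences: E = K * m * n * exp(-lambda * S)
        return K * query_len * db_len * math.exp(-LAMBDA * raw_score)

    def bit_score(raw_score):
        # Normalized score such that E = m * n * 2**(-bit_score)
        return (LAMBDA * raw_score - math.log(K)) / math.log(2)

    # Example: raw score 80, 300-residue query, 5e8-residue database.
    print(evalue(80, 300, 5e8), bit_score(80))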
Well, I'm sure one could look at the number of published papers etc., but that metric has a lot to do with hype and I see it as a lagging indicator.
A better one is seeing my grad-school friends with zero background in comp-sci or math presenting their cell-biology results with AlphaFold at conferences and lab meetings. They are not protein folding people either - just molecular biologists trying to present more evidence of docking partners and functional groups in their pathway of interest.
It reminds me of when CRISPR came out. There were ways to edit DNA before CRISPR, but it was tough to do right and required specialized knowledge. After CRISPR came out, even non-specialists like me in tangential fields could get started.
In both academic and industrial settings, I've seen an initial spark of hope about AlphaFold's utility being replaced with a resignation that it's cool, but not really useful. Yet in both settings it continued as a playing card for generating interest.
There's an on-point blog-post "AI and Biology" (https://www.science.org/content/blog-post/ai-and-biology) which illustrates why AlphaFold's real breakthrough is not super actionable for creating further bio-medicinal applications in a similar vein.
That article explains why AI might not work so well for biology discoveries further down the line, but I still think AlphaFold can really help with the development of small-molecule therapies that bind to particular known targets and not to others, etc.
The thing with available ligand + protein recorded structures is that they are much, much more sparse than available protein structures themselves (which are already kinda sparse, but good enough to allow AlphaFold). Some of the commonly-used datasets for benchmarking structure-based affinity models are so biased you can get a decent AUC by only looking at the target or ligand in isolation (lol).
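To make the "decent AUC from the ligand alone" point concrete, here is a toy sketch (synthetic data standing in for a biased benchmark, scikit-learn for the baseline): a model that never sees the protein should score near chance on a benchmark that genuinely requires the target, and when it doesn't, the benchmark is leaking.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Toy stand-in for a benchmark where ligand identity leaks the label:
    # the "actives" happen to be heavier/greasier than the chosen decoys.
    rng = np.random.default_rng(0)
    n = 2000
    y = rng.integers(0, 2, size=n)                 # 1 = active, 0 = decoy
    mol_weight = rng.normal(400 + 60 * y, 50)      # property shifts with the class
    logp = rng.normal(2.5 + 0.8 * y, 1.0)
    X_ligand = np.column_stack([mol_weight, logp]) # the protein never enters

    X_tr, X_te, y_tr, y_te = train_test_split(X_ligand, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"ligand-only AUC: {auc:.2f}")  # well above 0.5 despite ignoring the target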
Docking ligands doesn't make for particularly great structures, and snapshot structures really miss out on the important dynamics.
So it's hard for me to imagine how alphafold can help with small molecule development (alphafold2 doesn't even know what small molecules are). I agree it totally sounds plausible in principle, I've been in a team where such an idea was pushed before it flopped, but in practice I feel there's much less use to extract from there than one might think.
EDIT: To not be so purely negative: I'm sure real use can be found in tinkering with AlphaFold. But I really don't think it has or will become a big deal in small drug discovery workflows. My PoV is at least somewhat educated on the matter, but of course it does not reflect the breadth of what people are doing out there.
But Crispr actually edited genes. How much of this theoretical work was real, and how much was slop? Did the grad students actually achieve confirmation of their conformational predictions?
Surprisingly, yes: the predicted structures from AlphaFold had functional groups that fit with experimental data on binding partners and homologues. While I don't know whether it matched the actual crystal structure, it did match those orthogonal experiments (these were cell biology, genetics, and molecular biology labs, not protein structure labs, so they didn't try to crystallize the proteins themselves).
It solidly answered the question: "Is evolutionary sequence relationship and structure data sufficient to predict a large fraction of the structures that proteins adopt?" The answer, surprising few, is that the data we have can indeed be used to make general predictions (even outside of the training classes), and, surprising many, that we can do so with a minimum of evolutionary sequence data.
That people are arguing about the finer details of what it gets wrong is support for its value, not a detriment.
That's a bit like saying that the invention of the airplane proved that animals can fly, when birds are swooping around your head.
I mean, sure, prior to alphafold, the notion that sequence / structure relationship was "sufficient to predict" protein structure was merely a very confident theory that was used to regularly make the most reliable kind of structure predictions via homology modeling (it was also core to Rosetta, of course).
Now it is a very confident theory that is used to make a slightly larger subset of predictions via a totally different method, but still fails at the ones we don't know about. Vive la change!
I think an important detail here is that Rosetta did something beyond traditional homology models- it basically shrank the size of the alignments to small (n=7 or so?) sequences and used just tiny fragments from the PDB, assembled together with other fragments. That's sort of fundamentally distinct from homology modelling which tends to focus on much larger sequences.
3-mers and 9-mers, if I recall correctly. The fragment-based approach helped immensely with cutting down the conformational search space. The secondary structure of those fragments was enough to make educated guesses of the protein backbone’s, at a time where ab initio force field predictions struggled with it.
Yes, Rosetta did monte carlo substitution of 9-mers, followed by a refinement phase with 3-mers. Plus a bunch of other stuff to generate more specific backbone "moves" in weird circumstances.
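Very roughly, the fragment-assembly loop looks something like the sketch below - a toy only, with a made-up scoring function and (phi, psi) torsions as the degrees of freedom, not actual Rosetta code:

    import math
    import random

    def fragment_monte_carlo(n_res, fragments, score, n_steps=20000, kT=1.0):
        # Toy Rosetta-style search: splice in library fragments (9-mers first,
        # then 3-mers for refinement), accept/reject with the Metropolis rule.
        # fragments[(pos, length)] -> list of (phi, psi) snippets; score() is
        # any lower-is-better energy over the full torsion list (hypothetical).
        torsions = [(random.uniform(-180, 180), random.uniform(-180, 180))
                    for _ in range(n_res)]
        current = score(torsions)
        for step in range(n_steps):
            frag_len = 9 if step < n_steps // 2 else 3
            pos = random.randrange(n_res - frag_len + 1)
            old = torsions[pos:pos + frag_len]
            torsions[pos:pos + frag_len] = random.choice(fragments[(pos, frag_len)])
            proposed = score(torsions)
            if proposed <= current or random.random() < math.exp((current - proposed) / kT):
                current = proposed                   # accept the fragment insertion
            else:
                torsions[pos:pos + frag_len] = old   # revert
        return torsions, current

    if __name__ == "__main__":
        n = 60
        # Dummy inputs: random fragment libraries and a made-up smoothness "score".
        frags = {(p, L): [[(random.uniform(-180, 180), random.uniform(-180, 180))
                           for _ in range(L)] for _ in range(25)]
                 for L in (3, 9) for p in range(n - L + 1)}
        smooth = lambda t: sum(abs(t[i][0] - t[i - 1][0]) for i in range(1, len(t)))
        final, energy = fragment_monte_carlo(n, frags, smooth)
        print("final toy energy:", energy)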
In order to create those fragment libraries, there was a step involving generation of multiple-sequence alignments, pruning the alignments, etc. Rosetta used sequence homology to generate structure. This wasn't a wild, untested theory.
I don't know that I agree that fragment libraries use sequence homology. From my understanding of it, homology implies an actual evolutionary relationship, whereas fragment libraries are agnostic and instead seem to be based on the idea that short fragments of non-related proteins can match up in sequence and structure space. Nobody looks at 3-mers and 9-mers in homology modelling; the matches there are typically well over 25 amino acids long, and there is usually a plausible whole-domain relationship (in the SCOP terminology).
But, the protein field has always played loose with the term "homology".
Rosetta used remote sequence homology to generate the MSAs and find template fragments, which at the time was innovative. A similar strategy is employed for AlphaFold’s MSAs containing the evolutionary couplings.
Interestingly, the award was specifically for the impact of AlphaFold2 that won CASP 14 in 2020 using their EvoFormer architecture evolved from the Transformer, and not for AlphaFold that won CASP 13 in 2018 with a collection of ML models each separately trained, and which despite winning, performed at a much lower level than AlphaFold2 would perform two years later.
I wasn't expecting to see David Baker in the list (just Demis and John). But I'm really glad to see it... David is a great guy.
At CASP (the biennial protein structure prediction competition) around 2000, I sat down with David and told him that eventually machine learning would supplant humans at structure prediction (at the time Rosetta was already the leading structure prediction/design tool, but was filled with a bunch of ad-hoc hand-coded features and optimizers). He chuckled and said he doubted it: every time he updated the Rosetta model with newer PDB structures, the predictions got worse.
I will say that the Nobel committee needs to stop saying "protein folding" when they mean "protein structure prediction".
The models and tools designed for the CASP competition were an example of running around the solution space at a glacial pace and getting stuck in local minima. I can't speak for Rosetta, but my labmates had fairly successful tools that usually ranked right behind Baker's lab, and they were plagued by issues where the most successful models had physically impossible or idiosyncratic terms in them.
For example, a very successful folding model had the signs reversed on hydrophobic and some electrostatic interactions. It made no sense physically but it gave a better prediction than competing models, and it was hard to move away from because it ranked well in CASP.
Refreshingly good. Baker's "early" work (not really early, but earlier than AlphaFold; having humans with no background solve folds) really laid the groundwork for proving that heuristic methods were likely to outperform physical force-field and ab initio/DFT methods for structure prediction. And AI structure prediction, if nothing else, is heuristic protein folding.
They had to put David Baker on here; his work on protein design, if nothing else, was groundbreaking. I've expected him to win it at some point - it was never a matter of if, but of when.
Demis Hassabis has a really interesting and unusual CV for a Nobel laureate [1]: he started his career in AI game programming (he worked e.g. on Populous II, Syndicate, and Theme Park for Bullfrog, and later for Lionhead Studios on Black & White) before doing a PhD in neuroscience, becoming an entrepreneur, and starting DeepMind. I would say this is a refreshing and highly uncommon pick for a Nobel Prize; it's really cool to see that you don't have to be a university professor anymore to do this kind of impactful research.
I'm always interested in hearing about these people who go and get a PhD in a field unrelated to their original studies, often years after leaving university and working in industry. Here it says Hassabis did an undergraduate degree in computer science, then spent a decade working on computer games at studios, and then somehow just rocked up to a university and asked to do a PhD in neuroscience.
I feel if I tried to do that in the US (where I got a master's degree in engineering and spent 15 years as an aerospace engineer) - tried to go back and ask to do a PhD in, say, physics - I'd be promptly told to go fuck myself (or, fuck myself but then enroll in a new undergrad or maaaybe graduate program only after re-taking GREs. Straight PhD? Never heard it work like that.)
There is a lot of variation in how PhD studies work. In some places, you are just a faceless candidate applying to a department, which discourages you from contacting the faculty before you are admitted. In other places, you must convince a professor to supervise and fund you before you are even allowed to apply. Some universities require you (or your supervisor) pay tuition fees for a number of years before you can graduate, while others don't care what you do, as long as you can produce a thesis that meets their standards.
You can jump from social sciences to STEM. Your formal admission can wait for a year or two after you actually started. Or you can move to another university and get a PhD in a few months, because the administrative requirements in the original one were too unreasonable. These things happen, because universities are independent organizations that like doing things their own way.
A PhD is a thankless, low-paid position with insane hours and zero guaranteed return. Outside of a few elite programs and universities, getting into a PhD program is fairly easy - they take anyone qualified.
That may be more true in a super crowded and hot area, like AI now where reputable profs get dozens of PhD applicants per position, who all have already published relevant works in the field at top venues during their masters.
In more chill fields where the waters are relatively calm, this may be less of an issue.
But let's also consider the fact that Hassabis did his undergrad at the University of Cambridge, likely with excellent results. He wasn't just some random programmer.
Likely because Hassabis was a child prodigy.
He was a chess master at 13 and led the Cambridge chess team. It's not a stretch to assume Demis had an impeccable school record.
Why do you think that? It’s not my experience. At the grad school level they’ll take anyone who can do the work and is interested. Outside experience, even in unrelated fields, is often a plus. Grad students just out of undergrad have no idea how the world works.
I made the jump from mostly-math undergrad to materials science PhD (close to chemistry/physics, if you don't know the field). I was welcomed with open arms.
If you've got any math-heavy STEM graduate degree, you can likely jump into a physics PhD. You might need to take some senior-level undergraduate courses to catch up, but the transition is quite doable. At some point, your overall intelligence, enthusiasm, and work ethic matter more than your specific background.
What? Physics programs don't work like that, at least not T1-2 ones I know about. A physics PhD is not pay to play and anyone thinking it is would fail out on their qual at every university I'm familiar with.
I have known several people who made the jump from computer science to biology at graduate school. Usually it's either via genomics or neuroscience (as in Hassabis' case), where there is a large need for people who can do data crunching or computational modelling.
Are we talking about Einstein? If I remember correctly, according to Walter Isaacson, Einstein managed to get so many good papers out not despite, but because he was not working for an university. It gave him more freedom to reject existing ideas. Also the years I can find on Wikipedia do not seem to support your claim. He started as a clerk in 1903, and had his miracle year and submitted his PhD dissertation in 1905.
From what I've read, he explicitly sought a position that would give him time to work on his physics ideas. Whether or not he would have achieved the same working for a university is merely his opinion. In particular, it was not the case that he was working for one, found it incompatible with his research vision, and left academia to become a patent clerk.
Because having some degree of runway is almost always necessary but never sufficient. Thousands of Americans receive similar amounts of money from their parents in the form of inheriting the family home and other major assets. Only one took a windfall of that size and created Amazon.
The "necessary, but not sufficient" is unintuitive to most people. Billionaires who come from working class families are almost unheard of, but probably more than half the self made (for a definition, something like multiplied familial investments by at least 100x maybe?) billionaires come from upper middle class families.
I wonder if they are actually more likely to come from upper middle class (where parents are highly paid professionals) than the proper idle rich or even CEOs and company founders...
If you gave the $300,000 Bezos got from his father to 10,000 random Americans in 1994, none of them would have created a company the equivalent of Amazon's scale.
How many of those 10k would have the same background? We can pretend Bezos' dad raised him in a "normal" middle class background then randomly dropped 300K on him, or we can acknowledge he is the business equivalent of an Olympic athlete.
Miguel worked at Exxon for 32 years as an engineer and a manager. It's not like he was the CEO or anything close to that. There would literally be hundreds of thousands of people in a similar position to him across the world.
Also worth noting that Jeff Bezos was (and I think still is) the youngest person ever to become a senior VP at D. E. Shaw. That is a position earned by merit alone.
I think you are agreeing with the poster you are responding to, right? Bezos is the equivalent of an Olympic athlete: a combination of innate talent as well as opportunity.
So $300K in 1994 is about $640K today. That's nice, but around the 80th percentile of net worth. It's nice his parents believed in him. How many of your parents would do that for you? I'm sure at least 1 in 5 of them have that kind of money, given the distribution here. So the difference is: he was smart, he got lucky, and your parents don't believe in you enough on this front.
But compare and contrast Bezos and Musk. Bezos's mid-life crisis is leaving his wife to run around on his yacht banging models. Musk's mid-life crisis is trying to destroy democracy so he and his mom won't have to pay US taxes. Neither one is a role model, but I don't even get the point of the latter.
Which brings us back to AlphaFold. The AlphaFold team did something amazing. But also, they had a backer that believed in them. David Baker, for better or worse, didn't achieve what they did and he'd been at it for decades. It's amazing what good backing can achieve.
That's one metric, that only reflects Amazon's function as an income generator.
I view businesses through other metrics as well, including their impact on society in a variety of different ways. From some of those perspectives, it is not clear to me that Amazon (where I was the 2nd employee) is a net benefit.
This Amazon the company specifically or online shopping in general? E.g. if Amazon hadn't been made and some other online retailer had dominated (or even if there had been many!)
That may be true, but I don't think that's really the crux of the argument. This article talks about how Amazon was initially funded by Bezos' family members: https://luxurylaunches.com/celebrities/jeff-bezos-parents-in.... The bigger point is that relatively very few parents (like a couple percent maybe?) would be in a position to give their kid $250k to start a new venture, and it's not that surprising that the most financially successful people in the world needed both: intrinsic talent and drive, and a huge amount of support from their birth circumstances.
The way I like to put it is that both of the following are true:
1. Bezos is uniquely talented and driven, and his success depended on that
2. Bezos' success also depended on him having an uncommon level of access to capital at a young age.
The reason I like to say "both of these are true" is that so often today I see "sides" that try to argue that only one is true, e.g. libertarian-leaning folks (especially in the US) arguing that everything is a pure meritocracy, and on the other side that these phenomenally successful people just inherited their situation (e.g. "Elon Musk is only successful because his dad owned an emerald mine")
hypothesis : it's not per se affluence. it's the culture of the family and social circle. A dollop of $ to have some free time and maybe buy some books would help and might be necessary.
imagine a family where youngster is encouraged to work on intellectual problems. where you aren't made fun of for touching nerdy things. or for doing puzzles. where the social circle endorses learning. these things more important than $ in a first world economy. (if third world, yes give me some money please for a book or even just food. and hopefully with time, an internet connected device then the cream will rise they can just watch feynman on YouTube...)
that said, it's "better" than it used to be. hundreds of years ago most interesting science, etc. was done by the royal class. not because they are smarter (I assume). But they had free time. And, social encouragement perhaps too.
bill gates and zuck dropped out of Harvard right? it's not per se Harvard, at least not the graduating bit? being surrounded by other smart people is helpful -- and or people who encourage intellectual endeavors.
> hundreds of years ago most interesting science, etc. was done by the royal class
Not really true. Newton, Copernicus, Kepler, Galileo, Mendel, Faraday, Tesla... Not from royals, nor from high nobility. Many great scientists were born to merchant families, of a level that wasn't even all too rare.
The mental exercise is to compare two identical Jeff Bezos' (identify his attributes), one has the background/funds they did, one doesn't.
Of course, that's not possible, so then you do the same with other highly intelligent and skilled tech professionals. I'd argue that without the funding and other resources, those skilled pro's won't get anywhere. But with it, some would do incredibly well. It's not common in a global sense, but we see it every single day.
Comparing Bezos to thousands/millions of randomised others is pointless.
Then you may say, oh, but Amazon is unique. Yes, but then there are other factors at play, like the luck (skill? funding?) to take advantage of a unique moment in time at the start of the web. That moment isn't available anymore - I mean, try to start an Amazon today... etc.
I think the question is not whether Bezos, or Gates, were helped by a reasonably wealthy family. The question is- was that wealth helpful because it allowed them to fully develop their own potential; or they have no merit at all and anybody with that wealth would have done the same?
I think those who point out the privileged start of these entrepreneurs are suggesting the second, and yet that makes no sense (millions had the same privilege and didn't get anywhere close).
Disagree. I'm not suggesting the person in question does not have potential or merit and that anyone can do it. It's just that the wealth of a family can pour rocket fuel on that person to enable them to reach their potential.
Yes, that's debatable in each case, everyone has a different story. But it's very likely.
You do get the counterfactual, the plucky upstart who came from nothing. But I'd wager big that that's much rarer and more difficult.
There’s another side to this: if you accept the idea of “nature” — genes capable of carrying “talent” (in some sense) — it should be common for children of talented people to be talented.
Of course, talent doesn’t always mean prosperity. But in a society modeled on meritocracy, it often will.
Didn't know he worked on Black & White. Black & White was really ahead of its time for 2001; it did a much better job of having NPC simulations in groups based on how you played as a god.
Yep. I distinctly remember reading an interview in the German GameStar magazine in '99 or something with him where he talks about his early work with Bullfrog. Over the years I read his name from time to time as he moved towards research. Pretty amazing career.
While I am skeptical about yesterday's award in physics, these are totally deserved and spot on. There are few approaches that will accelerate the field of drug development and chemistry as a whole in a way that the works of these three people will. Congratulations!
> There are few approaches that will accelerate the field of drug development and chemistry as a whole in a way that the works of these three people will.
As the author of one such approach, I'm skeptical.
AlphaFold 2 just predicts protein structures. The thing about proteins is that they are often related to each other. If you are trying to predict the structure of a naturally occurring protein, chances are that there are related ones in the dataset of known 3D structures. This makes it much easier for ML. You are (roughly speaking) training on the test set.
However, for drug design, which is what AlphaFold 3 targets, you need to do well on actually novel inputs. It's a completely different use case.
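This is also why careful groups split benchmarks by sequence similarity rather than at random. A toy sketch of the cluster-then-split idea (difflib as a crude stand-in for a real identity/clustering tool such as MMseqs2 or CD-HIT):

    import random
    from difflib import SequenceMatcher

    def similarity(a, b):
        # Crude stand-in for percent sequence identity from a real aligner.
        return SequenceMatcher(None, a, b).ratio()

    def cluster_then_split(sequences, threshold=0.4, test_fraction=0.2, seed=0):
        # Greedy clustering: a sequence joins the first cluster whose representative
        # it resembles; whole clusters then go to train or test, so no test protein
        # has a close homolog sitting in the training set.
        clusters = []
        for seq in sequences:
            for cluster in clusters:
                if similarity(seq, cluster[0]) >= threshold:
                    cluster.append(seq)
                    break
            else:
                clusters.append([seq])
        random.Random(seed).shuffle(clusters)
        n_test = max(1, int(test_fraction * len(clusters)))
        test = [s for c in clusters[:n_test] for s in c]
        train = [s for c in clusters[n_test:] for s in c]
        return train, test

    # Tiny demo with made-up sequences.
    seqs = ["MKTAYIAKQR", "MKTAYLAKQR", "GGSGGSGGSG", "PLVWAEHKTT", "PLVWAEHKTA"]
    train, test = cluster_then_split(seqs)
    print(len(train), "train /", len(test), "test")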
Protein structures are similar to each other because of evolution (protein families exist because of shared ancestry of protein coding genes). It's not a weird coincidence that helps ML; it's inherent in the problem. Same with drug design -- very, very, few drugs are "novel" as opposed to being analogues of something naturally in the body.
They're referring to the structure of the protein when a drug is bound, that's what's novel. Novel as in, you can't think of it as "just" interpolation between known structures of evolutionarily related proteins.
That said I'm not sure that's entirely fair, since Alphafold does, as far as I know, work for predicting structures that are far away from structures that have previously been measured.
You're quite wrong about small molecule drug structures. Historically that has been the case but these days many lead structures are made by combinatorial chemistry and are not derived from natural products.
But even drugs made by combinatorial chemistry still generally end up being analogues of natural products even if they aren't derived from them. As Leslie Orgel said "Evolution is cleverer than you are"; chemists are unlikely to discover a mechanism of action that millions of years of evolution hasn't already found.
I... Don't think that's right? Although I would appreciate being corrected with some good sources on this. It's a fast moving field and combinatorial chemistry is still new enough that many recently published structures wouldn't have used it.
I'm well aware of the impact of natural products and particularly plant secondary metabolites in drug discovery. I'm also aware of combinatorial synthesis occasionally hitting structures that are close to natural products.
But from first principles, why would you need to limit yourself to that subset of molecular space?
Obviously, your structure will need to look vaguely biochemical to be compatible with the body's chemical environment, but natural products are limited to biochemically feasible syntheses, and are therefore dominated by structures derived from natural amino acids and similar basic biochemical building blocks.
For a concrete example off the top of my head, I'm not aware of any natural diazepines - the structure looks "organic" but biochemistry doesn't often make 7-rings, and those were made long before combinatorial chemistry. Might be wrong on this one, since there's so much out there, but I think it holds.
Perhaps we are using "structure" in different senses. Yes, it is possible to generate a molecule with a chemical structure unlike any biological molecule and have it bind to a protein, but it can only do so if its 3D structure is analogous to what naturally binds there. Natural products are a source of drugs because evolution has already done this work for us.
Yes, the chemical structures can look very different when drawn in the 2D manner, but that's why 2D structures aren't very useful for understanding binding, much as primary sequences of proteins aren't that useful. Morphine and fentanyl bind to μ-opioid receptors, just like what naturally binds there (endorphins and enkephalin). But if they are binding to the same receptor, they have to have similar structures in the biologically meaningful 3D sense (at least where they bind).
It appears that you were merely saying that ligands must adopt a 3D conformation that's complementary to the receptor. Sure. That's the entire premise of molecular docking software.
But there can be very dissimilar ligands (like morphine and fentanyl) binding the same receptors. A major goal of drug discovery is to find such novel binders, not to regurgitate known ones.
> It's not a weird coincidence that helps ML; it's inherent in the problem.
This depends on the application. If you are trying to design new proteins for something, unconstrained by evolution, you may want a method that does well on novel inputs.
> Same with drug design
Not by a long shot. There are maybe on the order of 10,000 known 3D protein-ligand structures. Meanwhile, when doing drug discovery, people scan drug libraries with millions to billions of molecules (using my software, oftentimes). These molecules will be very poorly represented in the training data.
I was just wondering when they were going to award the AlphaFold2 guys the Nobel after seeing Hinton win the physics one. 100% agree, all three of them totally deserve this one. Baker's lab is pretty much keeping DeepMind in check at this point and ensuring open-source research keeps up. Hats off.
Baker has been in the protein folding game for a long time and was the leader before AlphaFold came in... His generative paper came out, what, last year (2023)?
As someone in the drug discovery business I’m skeptical as I’ve seen many such “advances” flop.
I remember when computer aided drug design first came out (and several “quantum jumps” along the way). While useful they failed often at the most important cases.
New drugs tend to be developed in spaces we know very little about. Thus there is nothing useful for AI to be trained on.
Nothing quite like hearing from the computational scientist “if you make this one change it will improve binding by 1000x”. Then spending 3 weeks making it to find out it actually binds worse.
They were not equal contributors to the seminal paper that got the prize.
From another post in this thread:
"These authors contributed equally: John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, Alex Bridgland, Clemens Meyer, Simon A. A. Kohl, Andrew J. Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Demis Hassabis"
I think putting AlphaFold here was premature; it might not age well. AlphaFold is an impressive achievement but it simply has not "cracked the code for protein folding" - about 1/3rd of its predictions are too uncertain to be usable, it says nothing about dynamics, suffers from the same ML problems of failing on uncommon structures, and I was surprised to learn that many of its predictions are incorrect because it ignores topological constraints[1]. To be clear, these are constructive criticisms of AlphaFold in isolation, my grumpiness is directed at the Nobel committee. "Cracked the code for protein folding" is simply not true; it is an ML approach with high accuracy that suffers the same ML limitations of failing to generalize or failing to understand deeper principles like R^3 topology that cannot be gleaned stochastically.
More significantly: it has yet to be especially impactful in biochemistry research, nor has its results really been carefully audited. Maybe it will turn out to deserve the prize. But the committee needed to wait. I am concerned that they got spun by Google's PR campaign - or, considering yesterday's prize, Big Tech PR in general.
I think looking back five years from now, this will be viewed as another Kissinger/Obama but wrt STEM. Given far too prematurely under pressure to keep up with the Joneses/chase the hype.
I am not so confident or dismissive: the real problem is that testing millions of predictions (or any fairly bold scientific development like AlphaFold) takes time, and that time simply has not elapsed. Some of the criticisms I identified might be low-hanging fruit that in 5 years will be seen as minor corrections - but we're still discovering the things that need to be corrected. It is concerning that the prize announcement itself is either grossly overstated:
With its help, they have been able to predict the structure of virtually all the 200 million proteins that researchers have identified [the word 'virtually' is stretched into meaninglessness]
or vague, could have been done with other tools, and hardly Nobel-worthy:
Among a myriad of scientific applications, researchers can now better understand antibiotic resistance and create images of enzymes that can decompose plastic.
I am seriously wondering if they took Google / DeepMind press releases at face value.
Chew on this a little; I stripped out as much as possible, but I imagine it will still feel reflexively easy to dismiss. Partially because it's hard to hear criticism, at least for me. Partially because a lot was stripped out: a lot has gone sideways to get us to this point, so this may sound minor.
The fact you have to reach for "I [wonder if the votes were based on] Google / DeepMind press releases [taken] at face value." should be a red blaring alarm.
It creates a new premise[1] that enables continued permission to seek confirmation bias.
I was once told you should check your premises when facing an unexpected conclusion, and to do that before creating new ones. I strive to.
[1] All Nobel Prize voters choose their support based on reading a press release at face value
I have the same views as you (although admittedly the Kissinger comparison didn't convey that, because we all know how that turned out). It's at best quite premature. At worst, should never have been given in hindsight. Will probably land somewhere in between.
Second point is spot on. I really, really hope they didn't just fall for what is frankly a bit of an SV-style press release meant to hype things. Similar work was done on crystal structures, with some massive number reported. That's a vastly different thing from the implied meaning that they are now fully understood and able to be used in some way.
Yes, but the good has to be extraordinary. If there's logic to it, in these cases they are predicting the good will come much later. Which is an incredibly difficult prediction to make.
Entire fields are based upon the existence of crispr now, it demonstrated its impact. It has been 2? 3? years, people who were making papers anyway have implemented AlphaFold, it hasn't exactly spawned a new area.
I think it is definitely possible for ChatGPT to win the Nobel Prize in Literature.. maybe not this version or the next but eventually -- especially if it is by proxy aka as is the premise in "The Wife" (a good book/movie btw).
There's already precedent for anonymous creators, aka Banksy.
AlphaFold is a useful tool but it's unsatisfying from a physical chemistry perspective. It doesn't give much if any insight in to the mechanisms of folding, and is of very limited value in designing novel proteins with industrial applications, and in protein prediction for membrane-spanning proteins, extremophilic microbe proteins, etc.
Thus things like the folding kinetics of transition states and intermediates remain poorly understood through such statistical models, because they do not explicitly incorporate the physical laws governing the protein system, such as electrostatic interactions, solvation effects, or entropy-driven conformational changes.
In particular, environmental effects are neglected - there's no modeling of the native solvated environment, where water molecules, ions, and temperature directly affect the protein’s conformational stability. This is critical when it comes to designing a novel protein with catalytic activity that's stable under conditions like high salt, high temperature etc.
As far as Nobel Prizes, it was already understood in the field two decades ago that no single person or small group was going to have an Einstein moment and 'solve protein folding', it's just too complicated. This award is questionable and the marketing effort involved by the relevant actors has been rather misleading - for one of the worst examples of this see:
Dunno. It's a tool to try to figure the shape of molecules in a similar way that a cryo-electron microscope is a tool for that. Not the end of science imho.
Conjecture, that. Even if true I think it will be very hard to find any definition of science along the lines of "training deep neural nets to do the understanding in our stead".
It was big blind luck that the laws of planetary motion turned out to be so simple. There's no reason to think that protein folding can similarly be reduced to some elegant description without needing large blackbox models.
Yesterday's physics win was rather odd but this I have no problem with!
Lol does this mean there's a chance the Transformer Authors win a Nobel in literature sometime? Certainly seems a lot more plausible than before yesterday.
The prize winners are ultimately selected by a group of mid-age to old professors. And to tell the truth (I work at a research institute in Stockholm), some of the old folks seem to have huge FOMO. They know that they cannot keep up themselves, they have no idea (and no way of finding out) who is actually good and who is just pretending, which leads to recruitment of an 'interesting' bunch of young group leaders. Some of them are surely good, but I know of at least one guy who holds presentations as if he invented AlphaFold himself, while having contributed one single paper of interest to the field. Large turn off for me.
I do not criticize them for having FOMO. But I have my doubts when it is the 60-year-olds that are the most enthusiastic about something new (as long as it is not a new ABBA album), given the number of grifters out there. And there would have been many others that also deserve a Nobel, those three could easily have waited another 20 years. If it really was those that had the highest impact the last year who won the prize, it (or rather "Medicine") should have gone to GLP-1/Semaglutide research.
Right but this is a paradigm shift. If anything the 60 year-olds dumped on AI. Statisticians dumped on AI. Cybernetics/engineers all dumped on it. Just like everyone dumped on mRNA vaccines. I do agree with you about GLP-1 though that's legit as is the HIV vaccine.
But still, the way forward is computational thinking; that much is very clear.
Well... maybe. What I mean by novel fold would be something that isn't a product of evolution, because that's going to be incremental modifications to existing structures. That's why overfitting can be especially devious in this scenario.
Combine this with the Physics prize, I now have hope to receive a Nobel prize in any area.
Seriously, from now on, I won't mention Nobel prizes anywhere anymore.
It is well known around here that Baker does very, very little of the work. He is extremely good at putting his name on his students' work though (this is par for the course in academia)... and removing theirs (this is the bad part). At least he bribes them with lots of happy hours!
For those like myself who design proteins for a living, the open secret is that well before AlphaFold, it was pretty much possible to get a good-enough structure of any particular protein you really cared about (from say 2005) by other means, namely Baker’s Rosetta.
I constantly use AlphaFold structures today [1]. And AlphaFold is fantastic. But it only replaces one small step in solving any real-world problem involving proteins such as designing a safe, therapeutic protein binder to interrupt cancer-associated protein-protein interactions or designing an enzyme to degrade PFAS.
I think the primary achievement is that it gets protein structures in front of a lot more smart eyes, and for a lot more proteins. For “everyone else” who never needed to master computational protein structure prediction workflows before, they now have easy access to the rich, function-determinative structural information they need to understand and solve their problem.
The real tough problem in protein design is how to use these structure predictions to understand and ultimately create proteins we care about.
Forget Rosetta. Even installing that shit was hard, and running it on a sufficiently beefy machine was probably really not a thing in the late aughts. For protein design you mostly just need a quick and dirty estimate of what it looks like, and you have friend proteins that can be used to homology map, you could just use phyre/phyre2, which is an online threading model and be close enough to get work done. Upload the pdb, upload the sequence, bing bam boom.
So between this and the award for physics, it's basically a clean sweep of the Nobel prizes this year for AI. Quite a moment if you stand back and think about that.
So the Nobel committee was wrong to decide this because you think otherwise?
Interesting. Any more in-depth analysis of this?
Btw, you don't build AlphaFold by doing only 'computers'. Take a look at any good documentary about it and you will see that they discuss the chemistry at a deep level.
DeepMind isn't a chemistry company. Demis Hassabis isn't a chemist.
A tool they developed in their own area may turn out to be useful in chemistry. They may spend relatively little time and effort applying their tool to chemistry. They could do the same thing in many areas every few years and collect Nobels across fields. That effort is worthy of a prize, but the context is different.
It is possible that some committee members might have raised this same concern in their discussions.
"Relatively little": in comparison to real chemists, whose work is the basis for this development.
This is my first interaction in Hackernews, and I was expecting a more polite discussion. I just expressed my idea. You could ask for my explanations.
You didn't start the discussion with a good argument to begin with, though.
And I personally really think that if people from a different field jump into a new field and revolutionize it, a Nobel Prize is not a bad way to appreciate that effort.
I see a number of comments here about giving awards to organizations rather than individuals, and counter-comments pointing out that Nobel's will disallowed it.
How is the Nobel Prize actually administered? For how long is the Nobel committee bound to follow Alfred Nobel's will? And aren't there laws against perpetual trusts? Or is the rule against awarding the technical awards to organizations one that the committee maintains out of deference to Nobel's original intentions?
As a computer scientist who is opposed to the AGI boom-bubble mania, it was easy to decry the Nobel in physics. But, contextually, given who Murray Gell-Mann was and what field he was in (particle physics), I feel a very strong Gell-Mann amnesia effect here, because I am happy to accept THIS use of computational systems to advance (bio)chemistry as worthy, and I find myself wondering why I am so uncritical about it?
Feeling a bit down today, so just asking: when can we realistically expect to see the (positive) effects of this Nobel in daily life, and what would they be ? (I understand it's helping biotech a lot, but helping them do... what exactly ?)
The drug development process takes around 10+ years typically, a lot of long planning of multiple phases of studies needs to be done. This will help in the initial steps and in finding good starting points, and in theory should help the subsequent stages be more successful. I wouldn’t expect new drugs this decade.
Other aspects of biotech and research could well be affected far faster than the consumer drug market, but again you’ll need a few years for those early stage developments to aid real world applications.
And, at the heart of AlphaFold2 is the language model, the tip of the spear in AI today. 'Language' can come in many forms e.g. a protein or amino acid sequence.
AlphaFold 2 wasn't Q-learning based. It was supervised SGD and the "evoformer" they introduced is very close to a transformer. So it's not exactly an LLM, but it's a pretty close equivalent for protein data.
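Right, and on the input side the "protein as language" framing really is just tokenization over a 20-letter alphabet. A minimal sketch (PyTorch, toy dimensions; this is a plain sequence encoder in the ESM spirit, not the EvoFormer, which also consumes MSAs and pair representations):

    import torch
    import torch.nn as nn

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
    TOKEN = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

    def tokenize(seq):
        # A protein sequence is just a sentence over a 20-letter alphabet.
        return torch.tensor([TOKEN[aa] for aa in seq]).unsqueeze(0)  # (1, L)

    embed = nn.Embedding(len(AMINO_ACIDS), 64)
    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
        num_layers=2,
    )

    tokens = tokenize("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
    features = encoder(embed(tokens))   # (1, L, 64) per-residue representations
    print(features.shape)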
The Nobel Peace Prize has been awarded to a group of people or an institution countless times. It is controlled differently, but the idea is not unprecedented.
Surely it is helpful to consider the achievement in terms of the contest set up to detect a Nobel-worthy breakthrough: https://en.m.wikipedia.org/wiki/CASP
It moved the needle so much in terms of baseline capability. Let alone Nobel's original request - positive impact to humanity; well deserved.
In biology/medicine it is still regarded with awe, as if it came from a different planet; the tech before was obviously that lacking.
AlphaFold is also a high impact discovery, while Hopfield networks have very little to do with modern AI and they are only a very interesting toy model right now.
The AlphaFold paper has countless authors, many researchers and company resources underlying it. Hassabis’ contribution is management of resources and entrepreneurship, not the actual science. There are hundreds of thousands of scientists out there doing deep technical work, and they aren’t recognized.
I think we might be at the end of it, as the emphasis shifts to commercialization and product development.
These AI demonstrations require so many GPUs, so much specialized hardware, and so much data that nobody but the biggest players has them. Moreover, they are engineering work, not really science (putting together a lot of hacks and tweaks). Meanwhile, the person who led the transformer paper (a key ingredient in LLMs) hasn't been recognized.
This will incentivize scientists to focus on management of other researchers who will manage other researchers who will produce the technical inventions. The same issue arises with citations and indices, and the general reward structure in academia.
The signal these AI events convey to me: You better focus on practical stuff, and you better move on in the management ladder.
The Nobel Prize cannot go to a team, so they have to pick individuals. This is true for many (most?) Nobel Prize awards. Consider for example the discovery of gravitational waves - the team that built and operated LIGO was huge, but they have to pick. This has commonly been the case since the inception of the prizes - the professor gets the prize; the PhD students and postdocs usually don't. Not saying this is right, but it's the way it is.
For the gravitational wave discovery, the Nobel Prize went to the designers of LIGO, which was designed long before we actually built it. The example that fits your idea better would be Carlo Rubbia, who got the award in 1984 for leading the CERN team that discovered the W and Z bosons. He did not have any contribution other than leading the experiment that did it [1]; it is not like he designed or proposed the method we used to detect them. And the Nobel Prize for the Higgs discovery went to the theorists who proposed and predicted it, not the (thousands of) experimental physicists who discovered it in 2012.
So can we expect that Sam Altman will be honored with a Nobel Prize in 2025? After all, the physics prize went to AI researchers this year, and the chemistry prize went to the head of an organization.
The Nobel prize's prestige comes from its history, not from the size of the monetary award.
For example, the Millennium Technology Prize is awarded every two years and the prize money is slightly higher than the Nobel Prize's (1M EUR vs 0.94M EUR). The achievements it's been awarded for tend to be much more practical, immediate and understandable than the Nobel Prize achievements. The next one should be awarded in a couple of weeks.
And when that happens, it'll get 1/10th the publicity a Nobel prize gets, because the Nobel prize is older than any living human and has been accumulating prestige all that time, while the Millennium prize is only 20 years old.
You're really conflating things. Altman is no Hassabis.
A ton of hype around OpenAI doesn't detract from what DeepMind has done. AlphaGo, anybody?
Are we really already forgetting what a monumental problem protein folding was, after decades of research, before AlphaFold came in and revolutionized it overnight?
We are pretty jaded these days, when miracles are happening all the time and people are like "yeah, but he's just a manager now; what have they done for me in the last few days?".
I am missing context here and would love to know more.
Say I know about ATP Synthase and how the proteins/molecules involved there interact to make a sort of motor.
How does AlphaFold help us understand that or more complicated systems?
Are proteins quite often dispersed and unique, finding each other to interact with? Or like ATP Synthase are they more of a specific blueprint which tends to arrange in the same way but in different forms?
In other words:
Situation 1) Are there many ATP synthase-type situations we find too complex to understand - regular patterns and regular co-occurrences of proteins that we don't understand?
Situation 2) Or is most protein use situational and one-off? We see proteins only once or twice, very complicated ones, yet they do useful things?
I struggle to situate the problem of unknown proteins without knowing which of the above two is true (and why).
The Nobel prize isn't awarded for a paper. Even if (and that's a large if) all of these contributed equally to the results in the paper, some obviously did more than others to prepare the ground for that study.
I did raise an eyebrow at it too, but I doubt his contribution was entirely “management of resources”.
I think one must also give him the credit for the vision, risk taking and drive to apply the resources at his disposal, and RL, to these particular problems.
Without that push this research would never have been done, but there may have been many fungible people willing to iron out the details (and, to be fair, contribute some important ideas along the way).
I’m not a proponent of the “great man” theory of history, but based on the above I can see that this could be fair (although I have no way of telling if internally this is actually how it played out).
Agree. Hassabis is more than a manager. He did start DeepMind with just a few people and was a big part of the brains behind it.
Now that it has grown he might be doing more management. But the groundwork that went into AlphaFold was built on all the earlier Alphaxxx things they have built, and he contributed.
It isn't like other big tech managers that just got some new thing dumped in their lap. He did start off building this.
> The AlphaFold paper has countless authors, many researchers and company resources underlying it. Hassabis’ contribution is management of resources and entrepreneurship, not the actual science.
That's usually how you get a Nobel prize in science. You become an accomplished scientist, and eventually you lead a big lab/department/project, and with a massive team you work on projects where there are big discoveries. These discoveries aren't possible to attribute to individuals. If you look back through history and try to count how many "boss professor leading a massive team/project" cases there are vs. how many "Einstein type making a big discovery in their own head", I think you'll find that the former is a lot more common.
> This will incentivize scientists to focus on management of other researchers who will manage other researchers who will produce the technical inventions.
I don't think the Nobel prize is a large driver of science. It's a celebration and a way to put a spotlight on something and someone. But I doubt many people choose careers or projects based on "this might get us the prize..."
> You become an accomplished scientist, and eventually you lead a big lab/department/project, and with a massive team you work on projects where there are big discoveries.
That's a very recent thing. Up to the 90s, the Nobel committee refused to even recognize it. They only started to award such prizes in the 21st century, and in most fields they never became the majority.
> These AI demonstrations require so many GPUs, specialized hardware and data that nobody has but the biggest players. Moreover, they are engineering work, not really scientistic
The Nobel prize is aimed at the general public. It has a kind of late 19th century progressive humanistic ethos. It's science outreach. This way, at least once a year, the everyday layperson hears about scientific discoveries.
The Nobel isn't a vehicle to recognize hundreds of thousands of deeply technical scientific researchers. How could it be? They have to pick a symbolic figurehead to represent a breakthrough.
They could also simply give it to "DeepMind" similar to how they give the peace prize to orgs sometimes, or how the Time Person of the Year is sometimes something abstract as well (like the cutesy "You" of 2006). But it would be silly. Just deal with it, we can't "recognize" hundreds of thousands, and we want to see a personal face, not a logo of a company getting the award. That's how we are, better learn to deal with it.
> The Nobel prize is aimed at the general public...
Which is okay. The Nobel prize is okay.
> This way, at least once a year, the everyday layperson hears about scientific discoveries.
Spot on.
The problem we have is that the everyday layperson hears very little about scientific discoveries. The scientists themselves, one in a million of them, can get a Nobel prize. The rest, if they are lucky, get a somewhat okay salary. Sometimes better than that of a software engineer. Almost always worse working hours.
But I suppose it's all for the best. Imagine a world where a good scientist, one that knows everything about biology and protein folding, gets to avoid cancer and even aging, while the everyday layperson can only go to the doctor...
"These authors contributed equally: John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, Alex Bridgland, Clemens Meyer, Simon A. A. Kohl, Andrew J. Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Demis Hassabis"
Guess how many of them were included in the prize. It's a shame that the Nobel committee shies away from awarding it to institutions, but the AlphaFold prize doesn't even make the top 10 in a list of most controversial omissions from a Nobel prize. It's a simple case of lab director gets the most credit.
Can they? I mean, in the sense that you can yolo anything, sure, but the prizes were designed in a time when it was (more) reasonable to award them to individuals, and they are defined in a will. There may not be a mechanism for updating the standards.
Yes, they can. In 1901, science was not nearly as collaborative as it is today. Especially considering the need for a Nobel Prize to be experimental and the fact that most major labs today _need_ dozens of people.
He asked _can_ not _should_: what is the legal mechanism for doing so? Personally I don't doubt there is one but I don't think you know it off the top of your head, so I don't see it as fair to disparage OP for not knowing either.
Well, it seems like amendments have been made before. [0]
> Before the board ... votes on a proposal to amend the statutes ... with the first paragraph, the prize-awarding bodies shall examine the proposal.
The nature of scientific work has changed significantly since 1895, when the Nobel Prizes were established. 100 years ago, a lot of scientific work really was driven forward largely by a single person. That's rarely true today for groundbreaking research. I don't know if this means the Nobel needs to change or we need another prize that reflects the collaborative nature of modern science.
> The nature of scientific work has changed significantly since 1895, when the Nobel Prizes were established. 100 years ago, a lot of scientific work really was driven forward largely by a single person. That's rarely true today for groundbreaking research.
The question is: is this a necessity for doing good science today, or rather an artifact of how important research is organized today (i.e. an artifact of the bureaucratic and organizational structure that you have to "accept"/"tolerate" if you want to have a career in science)?
So I did a mini research project with Claude to answer your question. From 1900-1930, 87% of Nobel Prizes in Physics, Chemistry and Physiology/Medicine were awarded for individual contributions, and 13% for collaborative contributions.
This ratio has flipped in the past 30 years: from 1994-2023, 17% of prizes were individual and 83% collaborative (a rough way to sanity-check this directly is sketched at the end of this comment).
So I'd say yes, collaborative work is increasingly a requirement for doing groundbreaking research today. The organizational structures and funding are part of the reason, as you mention. But it's also that modern scientific problems are more complex. I had a professor who used to say about biology that "the easy problems have been solved". While I think that's dismissive of some of the ingenious experiments done in the past, there's some truth to it.
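Here is a rough sketch of how one could tally the solo-vs-shared split directly, using the number of laureates per prize as a (crude) proxy for individual vs. collaborative work. The endpoint and JSON field names are my assumption of the public v1 Nobel Prize API, so treat them as unverified.

    # Rough sketch: count science prizes with one laureate vs. several, per era.
    # Endpoint and field names assumed from the public Nobel Prize API (v1);
    # laureate count is only a proxy for "individual vs. collaborative" work.
    import json
    from collections import Counter
    from urllib.request import urlopen

    SCIENCE = {"physics", "chemistry", "medicine"}

    with urlopen("http://api.nobelprize.org/v1/prize.json") as resp:
        prizes = json.load(resp)["prizes"]

    def era(year):
        if 1900 <= year <= 1930:
            return "1900-1930"
        if 1994 <= year <= 2023:
            return "1994-2023"
        return None

    counts = Counter()
    for p in prizes:
        laureates = p.get("laureates", [])
        e = era(int(p["year"]))
        if p["category"] in SCIENCE and laureates and e:
            kind = "individual" if len(laureates) == 1 else "shared"
            counts[(e, kind)] += 1

    print(counts)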
This begs the question. If all science is now structured as big research teams, we'd expect the breakthroughs to come from such teams. That doesn’t necessarily imply that teams are needed.
"J.J. and D.H. led the research. J.J., R.E., A. Pritzel, M.F., O.R., R.B., A. Potapenko, S.A.A.K., B.R.-P., J.A., M.P., T. Berghammer and O.V. developed the neural network architecture and training. T.G., A.Ž., K.T., R.B., A.B., R.E., A.J.B., A.C., S.N., R.J., D.R., M.Z. and S.B. developed the data, analytics and inference systems. D.H., K.K., P.K., C.M. and E.C. managed the research. T.G. led the technical platform. P.K., A.W.S., K.K., O.V., D.S., S.P. and T. Back contributed technical advice and ideas. M.S. created the BFD genomics database and provided technical assistance on HHBlits. D.H., R.E., A.W.S. and K.K. conceived the AlphaFold project. J.J., R.E. and A.W.S. conceived the end-to-end approach. J.J., A. Pritzel, O.R., A. Potapenko, R.E., M.F., T.G., K.T., C.M. and D.H. wrote the paper."
Not all of the work. For example, it doesn't account for the fact that Demis Hassabis, as head of DeepMind, undoubtedly recruited many of the co-authors to participate in this effort, which is worth something when it comes to the final output.
Great achievement, although I think it's interesting that this Nobel prize was awarded so early, with "the greatest benefit on mankind" still outstanding. Are there already any clinically approved drugs based on AI out there I might have missed?
In comparison, the one for lithium batteries was awarded in 2019, over 30 years after the original research, when probably more than half of the world's population already used them on a daily basis.
Arguably, awarding early is more in line with the intention expressed in Nobel's will: "to those who, during the preceding year, have conferred the greatest benefit to humankind". It seems to have drifted into "who did something decades ago that we're now confident enough in the global significance of to award a prize". I suspect that if the work the prize recognized really had to have been carried out in the preceding year, the recipients would be rather different.
Given that drugs take around 10 years to get to market, and that some time is needed for industrial adoption as well, it's not reasonable to expect clinically approved drugs for a few years yet.
This is really sad. A new recipe for feeding honeybees to make tastier honey could get to market in perhaps a month or two. All the chemical reactions happening in the bees' gut and all the chemicals in the resulting honey are unknown, yet within a matter of weeks it's being eaten.
Yet if we find a new way of combining chemicals to cure cancer, it takes a decade before most can benefit.
I feel like we don't balance our risks vs rewards well.
I think the idea is that we're, as a species, much more comfortable with the idea that, 15 years down the line, 50% of treated colonies collapse in a way directly attributable to the treatment than we are with the idea that, 15 years down the line, 50% of treated humans die in a way directly attributable to the treatment.
Now if the human alternative to treatment is to die anyway, then I think that balance shifts. I do think we should be somewhat liberal with experimental treatments for patients in dire need, but you have to also understand that experimental treatments can just be really expensive, which limits either the people who can afford them or, if they're given for free, the amount the researcher can make/perform/provide.
10 years is a very long time. I've had close family members die of cancer, and any opportunity for treatment (read: hope) is good in my opinion. But I wouldn't say there's no reason that it takes so long.
The Nobel Prize has always prioritised advances in the field over specific training.
Curie was a trained chemist when she won her prize in physics. Michelson was a Naval Officer. Of course, being able to win a Nobel usually means you studied the field your entire life, but that has never been a requirement.
First Obama got the Nobel Peace Prize, now Demis Hassabis gets the Chemistry Nobel. I expect, at a minimum, the Nobel Prize in Literature to go to Donald E. Knuth.
Yes, eventually GPT itself may be capable of winning a Nobel off its own writing, but before then... the authors of the Transformer paper might win one? Certainly seems a lot more plausible now.
I wonder why various outlets, including DeepMind's blog, say that John Jumper is a "Senior Research Scientist". That's L5, which sounds like quite a low rank for a Nobel prize winner. I checked his LinkedIn and he's a director, which is around L8. I thought that maybe he was L5 when the results were published, but no, he was either L6 or L7.
Maybe "Senior Research Scientist" sounds more respectable among the intended audience. A research scientist is usually an independent researcher rather than someone working in another person's team, while "senior" indicates that they have been in an independent role for a while. A director, on the other hand, is someone who failed to avoid administrative duties, and it doesn't imply any degree of seniority.
L5 doesn't mean anything to anyone outside of whatever organization you're talking about (Google?). A Senior Research Scientist means "a person who is a scientist, works in research, and is very experienced in that role". Even if this is not the title he holds in his organization, it is an objective title that applies to him.