Prostate cancer includes two different evotypes (ox.ac.uk)
152 points by panabee 6 months ago | 84 comments



I tried to dissect this paper on my recent 6h flight. Despite a decade of research in prostate cancer genomics, I could not figure out what is going on. The results are both vague and at odds with established knowledge, and the computational approaches are complex and impenetrable. I sent it to my students as an example of computational overcooking.


I am going to steal that name. I find that computational overcooking happens more and more, because the easy questions that can be asked of sequencing datasets are starting to dry up.


> computational overcooking

The two words make perfect sense together. Reminds me of the "deep-fried memes" subculture.


OK, very quick notes:

• Prostate cancers are known to have a wide spectrum of outcomes.

• Stage IV (metastatic) disease tends to have genetic testing. These panels tend to be on the order of a few hundred genes to 1000s of genes.

• Classically, prostate cancer is driven by androgen receptor upregulation. Disease progression is often due to the disease overcoming treatment with antiandrogens such as enzalutamide.

Correction: enzalutamide was designed to overcome castrate-resistant prostate cancer. Abiraterone would have been more appropriate to bring up here.

• Upon review of NCCN guidelines: there are two main genetic indicators for targeted therapies, both indicated in germline and somatic contexts: BRCA1/2 for PARP inhibition and dMMR/MSI-H for pembrolizumab.

o Note that there are some mutations within the HRD pathway that are indicated for treatment, but only if they are somatic.

• This study aims to figure out the etiology of the disease in an evolutionary manner. That is: what are the key events that lead to oncogenesis?

Edit note: the word that escaped me was epistatic, given that we are looking into the nuts-and-bolts cause and effect of different mutations.

Edit note 2: I'm going to be honest, most of what I've read about prostate cancer is in the metastatic setting, where it has already become castrate resistant. Abiraterone is also meant to aid in sensitizing castrate-resistant prostate cancer. Let's just say androgen deprivation therapy for now. In any case, I hope this was instructive in showing how important AR is as a pathway for prostate cancer.

• Quick thought: this could be useful if it matches molecular screening in earlier-stage disease. If we can reliably map out which chain of events (loss-of-function mutations in tumor suppressors, gain-of-function mutations in oncogenes, chromosome-level mutations) leads to more aggressive disease, we can inform changes in surveillance and earlier/more aggressive treatment.

• Granted, this isn't too out there; tissue cores are taken to begin with to get an initial snapshot of how aggressive the disease is (it's how you get the Gleason score, after all).

• Regarding switching pathways, that's not too crazy, given that neuroendocrine transformations exist in both prostate and lung cancer.


Isn't it the case that there are a near-infinite number of forms of cancer for any organ? Any combination of mutations that causes unrestricted growth, right? So how can it be just 2 forms?


If you weld 2 sticks of metal together end to end, then bash them on the ground, they're likely to break at the weld. Insert additional physical examples here.

Likewise DNA is likely to break (and fail to be corrected) at particular points. Some of those points cause cancer, but there's only a finite set of those points since DNA is largely identical across all humans. Additionally, even if the failure is slightly to one side of the expected break, it usually shows the same symptom.


> If you weld 2 sticks of metal together end to end, then bash them on the ground, they're likely to break at the weld

This doesn't sound even remotely true to me, except in the case of an absolute beginner doing the welding


In principle, yes; in practice, no. Real-world mutations are (more often than not) non-random, and their frequencies can be affected by a variety of factors: for example, the location of the mutated gene or region within the bundled chromatin structure inside the cell nucleus (this structure is highly conserved into what are known as topologically associating domains, or TADs), or the interaction between a region of DNA and cellular machinery that increases the likelihood of some mutation. There are tons of examples.

In practice, we've now molecularly characterized most well-studied cancers and know that they tend to have the same mutations. For example, certain DNMT3A mutations are very common in AML, and the BCR-ABL fusion protein in CML (which results from a translocation between chromosomes 9 and 22 that produces the mutant 'Philadelphia chromosome'). There is even a wide range of cancers that share similar patterns of mutations and fall under the umbrella of 'RAS-opathies', which all exhibit some kind of mutation in a subset of genes on a specific pathway related to cell differentiation and growth. Examples include certain subtypes of colon cancer, lung cancer, and melanoma, among many others.

More generally, when a cancer is subtyped, that subtyping is always done with respect to some quantifiable biological trait or clinical endpoint and – as you've hinted – it is commonly a statistical assessment. Each cancer is unique and, even within an individual cancer, we have clonal subpopulations: groups of cells with differing mutations, characteristics, and behaviors. That's one of the reasons treating cancer can be so challenging; even if we eliminate one clonal population entirely, another resistant group may take its place. The implication is that cancers that emerge at post-treatment relapse often (1) are partially or completely resistant to the original therapy, and (2) exhibit different behaviors and resistances, often to the detriment of the patient's outcome.


You could theoretically subclassify cancers all the way down to individual gene mutations, and even then there is heterogeneity within one cancer itself. That is the idea behind "personalized medicine", though it's being distorted by hype.

However, medicine is practical and tries to draw boundaries where different therapies help differently or where there is different pathophysiology going on.

The article was able to draw a new additional boundary. Its relevance, whether in phenotype or druggability, is yet to be confirmed. If it turns out to be useful in predicting therapy response or outcome, it'll stick and oncologists will learn it.

This process has already been repeated a lot in the hematological cancers, where broad categories like "Hodgkin's lymphoma" have been subdivided as we made new treatments and discovered the individual pathways.


There are redundant pathways, but it's not a million of them. I guess different mutations can converge on a common phenotype. And in any case, what you present in a study is something like a model that you derive from the data. You could probably go deeper, but you'd need more patients/more resources.


I think there is a difference between the infinite, as in the example of monkeys typing Shakespeare, and the most vulnerable and likely areas to cause cancer, which are what need to be focused on. In this specific, particular, and (for most people) very likely case, it seems to settle into 2 subtypes.


</pull paper>

> Despite the reduced dimensionality of the feature representation, application of standard clustering methods remains problematic due to the high dimension of features (30) relative to the sample size (159). To mitigate this,

and I'm done.


There really isn't a lot going on here. They're trying to simplify the model, but running into issues with the small sample size.
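
For intuition, here's a minimal sketch of that reduce-then-cluster pattern (my own toy version, not the paper's actual pipeline; the feature matrix is random noise, and the component and cluster counts are made up for illustration):

    # Toy sketch: p = 30 features vs. only n = 159 samples, so reduce
    # dimension before clustering and sanity-check cluster quality.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(159, 30))  # stand-in for the real feature matrix

    # Project down to a few components before clustering.
    Z = PCA(n_components=5, random_state=0).fit_transform(X)

    # Compare candidate cluster counts; on real data, k = 2 winning
    # would correspond to the paper's two "evotypes".
    for k in range(2, 6):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Z)
        print(k, round(silhouette_score(Z, labels), 3))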

Many papers tend to use overly dense prose, and a lot of it seems self-congratulatory on the part of the authors. On the one hand, they're addressing their small audience at the forefront of their particular field. But on the other, they do appear to take pride in their prowess at academic gesticulating.

It's the exact opposite of Bezos' recommendation to write simply and to try not to sound smart [1]. There's a way to be precise without being impenetrable. Unfortunately, those writing the primary literature sometimes enjoy their special cathedrals. And at the end of the day, they're not writing this stuff for the layperson. It's a shame, though, especially with regard to publicly funded research.

[1] https://medium.com/@writerbites/how-to-write-like-jeff-bezos...



Gleason scores already designate 5 different types/categories of PaC, and this score strongly influences what type of barbaric treatment one will receive.

I hope their findings help the discovery of humane immunotherapies.


I was just diagnosed with 7/7.5 Gleason score prostate cancer (adenocarcinoma).

The barbarians are cutting it out in 4 weeks.

A life of pissing my pants beats dying before retirement.


I understand and agree. I was Gleason 7: 4+3, intermediate aggressive. Caught it very early, no spread, so RALP was performed 5 months ago. Now I'm deep in the throes of pelvic floor PT, hoping to regain some semblance of control.

Hopefully your surgeon has performed (unlike mine) more than 1000 RALPs - the incontinence outcomes improve greatly when their experience is that high.

Best of luck to you. DM me at gmail if you need/want to talk about your upcoming RALP.


I will do that. I have a similar diagnosis.

I know someone who had the surgery 13 years ago and it went very well. But 5 months on for you gives me the uneasies.


My dad had it 14 years ago. It's been in remission, and to my knowledge there have been no accidents; I don't think he had many issues from the start.


There's a very good chance, 90% or higher, if you're under 70, not obese, and in general good health, that you will have no issues with impotence or incontinence after recovery from surgery.

This is really the best option for healthy fit people under 70.


90% or higher? No way; there's no credible large-scale survey that makes that claim. Closer to 15-25% suffer from incontinence permanently. And 30-50% will have ED. And 100% of outcomes will lose inches.

50% of RALP patients will have biochemical recurrence within 8 years. It takes 15 years of low PSA (<0.1 ng/ml) to state you are cured.

The goal is to die with PaC, not because of it.

All of these stats are available at PCRI.


See: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9917389/

> Several studies report a progressive return of continence up to one year after RP, with a continence rate ranging from 68 to 97% at 12 months [13,14,15,16,17], while a progressive further improvement could be registered up to 2 years [18].

Note that I qualified it with under 70, good health otherwise, and not obese or overweight. If you're in that category, you are extremely likely not to have continence problems after a year.


> The goal is to die with PaC, not because of it.

The reason many people die "with" it is because 67% of Americans are obese or overweight, and it usually doesn't show up 'til a person is 70. Of course these people die of something else first.

If a person isn't fat, and has no other health problems, getting it removed if it hasn't spread beyond the prostate is the best choice.


My father was diagnosed with a Gleason score of 10 this year.

When it’s this bad, ironically some barbaric options are off the table.


Kudos for this identification, but at this point, calling the use of neural networks in statistical work "AI" is misleading at best. I know it won't stop, because claiming "AI" gets attention, but it's really depressing. Ultimately it's not really any different from all the talk about "mechanical brains" in the 50s, but it's just really tiresome.


John McCarthy coined the term AI in 1955 for a research project that included neural networks. He then founded the MIT AI Project (later the AI Lab) with one of the researchers who joined the project, Marvin Minsky, who had also created the first NN in 1951.

If NNs aren't AI, what is?

http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf


It implies agency on the part of the software that doesn't exist. It should really just say "researchers ran some math calculations and found X." The fact that the math runs on a computer and involves the use of functions that were found by heuristic-guided search using a function estimator, instead of human scientists finding them from first principles, is surely relevant in some way, but this has been the case for at least a century, since Einstein's field equations required numerical approximations of PDE solutions to compute gravity at a point – probably longer.

I don't want to say there is no qualitative difference between what the PDE solvers of 1910 could do and what a GPT can do, but until we don't need scientists running the software at all, and it can decide to do this all on its own and know what to do and how to interpret it, it feels misleading to use terminology like "AI", which in the public consciousness has always meant full autonomy. It's going to make people think someone just told a computer "hey, go do science" and it figured this out.


In PR materials, AI = computers were involved


That's a problem with the term now. Whenever there is a PR statement about using AI, it does not have much meaning attached to it. Sometimes even simple algorithms are called AI, and even more so if there is a bit of statistics involved. I liked the term machine learning, because now, with AI-everything, I don't really know what it is about.


I think the average person can safely call "the use of neural networks in statistical work" AI.


It's technically correct, but AI has become such an overloaded term that it's impossible to know it refers to "the use of neural networks" without explicitly saying so. So you know, maybe just say that?


This debate is a classic. AI has always been an overloaded term and more of a marketing signifier than anything else.

The rule of thumb is, historically, "something is AI while it doesn't work". Originally, techniques like A* search were regarded as AI; they definitely wouldn't be now. Information retrieval, similarly. "Machine learning", as a brand, was an effort to get statistical techniques (like neural networks, though at the time it was more "linear regression and random forests") out from under the AI stigma; AI was "the thing that doesn't work".

But we're culturally optimistic about AI's prospects again, so all the machine learning work is merrily being rebranded as AI. The wheel will turn again, eventually.


... and once it works, it "earns" a name of its own, at least among people actually doing it. Even in 2024 there are Machine Learning Conferences of note.


ICLR, ICML, KDD, ICCV, ECCV and NeurIPS are all major "AI" conferences and none of them has AI in the name. NeurIPS comes closest. "AI" has historically not been a useful distinction for anyone in the field.

I actually think this is changing given the current rapid ascent of multimodal models.


> calling the use of neural networks in statistical work "AI" is misleading at best.

Neural Networks are not considered AI anymore?

That just reinforces my thesis that "AI" is an ever-sliding window that means "something we don't yet have". Voice recognition used to be firmly in the "AI" camp and even received grants from the military. Now we have it on wrist watches (admittedly with some computation offloaded) and nobody cares. Expert systems were once very much "AI".

LLMs will suffer the same treatment pretty soon. Just wait.


The current usage of AI is a rather new marketing term.

Where would you draw the line? Is prediction via linear regression AI?

Also language is fuzzy and fluid, get used to it.


It was called machine learning 10 years ago since AI had bad connotations, but today people have forgotten and call it all AI again.


Maybe it's a good indicator of misuse that the paper doesn't mention 'AI' or 'intelligence' once.

> my thesis that "AI" is an ever-sliding window that means "something we don't yet have"

Or maybe it's the sliding window of "well, turns out this ain't it, there is more to intelligence than we wanted it to be".

If everything is intelligent, nothing is. If you define pattern recognition as intelligence, you'd be challenged to find unintelligent lifeforms, for example. You haven't learned to recognize faces; you are literally born with that ability. And well, life at least has agency. Is evolution itself intelligent? What about water slowly wearing down rock into canyons?


Pretty soon? I already regularly see people proudly stating that LLMs "aren't really AI" and are just "a Markov chain" (yeah, sure, let's ignore the self-attention mechanism of transformers, which violates the Markov property).

For the sake of my sanity I've just started tuning out what anyone says about AI outside of specialist spaces and forums. I welcome educated disagreement from my positions, but I really can't take the antivaxx equivalent in machine learning anymore.


What if we hold off on calling it AI until it shows signs of intelligence?


Chess was a major topic of AI research for decades because playing a good game of chess was seen as a sign of intelligence. Until computers started playing better than people and we decided it didn't count for some reason. It reminds me of the (real) quote by I.I. Rabi that got used in Nolan's movie when Rabi was frustrated with how the committee was minimizing the accomplishments of Oppenheimer: "We have an A-bomb! What more do you want, mermaids?"


They chased chess because they thought that if they could solve chess, then AGI would be close. They were wrong, so they moved the goalposts to something more complicated, thinking that new thing would lead to AGI. Repeat forever.

> we decided it didn't count for some reason

Optimists did move their goals once it was realized that solving chess didn't actually lead anywhere, and then they blamed the pessimists for moving, even though the pessimists mostly stayed still throughout these AI hype waves. It is funny that the optimists are constantly wrong and have to move their goal like that, yes, but people tend to point the finger at the wrong people here.

The AI winter came from AI optimists constantly moving the goalposts like that, constantly saying "we are almost there, the goal is just this next thing and we are basically done!" AI pessimists don't do that; all of that came from the optimists trying to get more funding.

And we see that exact same thing play out today, a lot of AI optimists clamoring for massive amounts of money because they are close to AGI, just like what we have seen in the past. Maybe they are right this time, but this time just like back then it is those optimists that are setting and moving the goal posts.


It turned out you can use some clever search algorithms rather than intelligence to play chess, yeah.


It also turns out that you can make machines that fly without having them flap their wings like flying animals. But it would be absurd to claim that airplanes don't fly for that reason.


Good thing I never claimed planes don't fly.

I think you'll find that defining "intelligence" is a bit harder than defining "flight", and that "a machine programmed to mechanically follow the steps of the minimax algorithm as applied to chess, and do nothing else" doesn't fit most people's definition of "intelligence" in the context of the philosophical question of what constitutes intelligence.
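
For what it's worth, that mechanical procedure is short enough to show in full. A minimal minimax sketch over a made-up toy game tree (nested lists of leaf scores, not chess, but the same procedure early chess engines were built on):

    # Plain minimax: mechanically search the game tree, assuming the
    # opponent also plays optimally. Leaves are position scores.
    def minimax(node, maximizing=True):
        if isinstance(node, (int, float)):
            return node
        values = [minimax(child, not maximizing) for child in node]
        return max(values) if maximizing else min(values)

    # The maximizer picks the branch whose worst case is best:
    # min(3,12)=3, min(2,8)=2, min(14,1)=1, so the answer is 3.
    tree = [[3, 12], [2, 8], [14, 1]]
    print(minimax(tree))  # -> 3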


And what signs of intelligence are we looking for this year?


Some sign that it's more than autocomplete would be nice, maybe the ability to perform some kind of logical reasoning. ChatGPT does a good job of putting up an illusion of human-like intelligence, but speak to it for like 10 minutes and its nature as a plausible text generator quickly makes itself apparent


Another entry in the 'marketing and technical terms don't mean the same thing despite using the same words' saga.


AI has always meant Artificial Intelligence. Intelligent and capable of learning, like a person.

LLMs are not AI.


> LLMs are not AI.

Neither are neural networks, by that definition. Or 'machine learning' in general. They have all been called "AI" at different points in time. Even expert systems – which are glorified IF statements – were supposed to replace doctors.


People thought those techniques would ultimately become something intelligent, hence AI, but they fizzled out. That isn't the doubters moving the goalposts; that is the optimists moving them, always thinking what we have now is the golden ticket to truly intelligent systems.


Correct, we don't have AI yet.


Some people are incapable of learning. Therefore, LLMs are AI?

As far as I recall, the Turing test was developed long ago to give a practical answer to what was and was not artificial intelligence, because the debate over the definition is much older than we are.


Everyone is capable of learning; otherwise they'd have died as a toddler, or at any time since when they tried to cross the road.


I think the Turing test is subjective, because the result depends on who was giving the test and for how long.


I mean. "AI" has meant "whatever shiny new computer thing is hot right now" in both common vernacular and every academic field besides AI research basically since the term was coined...


> but it's really depressing

Why do you find it depressing?


The taxonomy of AI is the following:

AI
-> machine learning
   -> supervised
      -> neural networks
   -> unsupervised
   -> reinforcement
-> massive if/then statements
-> ...

That is to say, NNs fall under AI, but then nearly everything falls under AI.


Where did you pull this “taxonomy” from?


Why would it be depressing? Who cares?


Because regulating "AI" has the potential to encompass all software development. Many programs act as decision support. From the outside there's little difference between an application that uses conventional programming, ML, RNNs, or GPT.


Taking credit from the drivers of tools to give credit to the tools themselves, as PR? Yes, depressing. No one says "Unreal Engine Presents Final Fantasy VII". It's an important tool, but not the creative mind behind the work.


Should the term AI just not be used until we have a Skynet-level AI?


No, it should only be used about things that don’t exist.


No; at least in this case, humans revealed something, not "AI". Unfortunately people need to use 'AI' to get some attention (no pun intended).


The article itself never uses the term "AI" or "Artificial Intelligence". They do mention the use of a neural network as one part of their attempt to find commonalities in their data set. It is too bad that the Oxford University press person chose to use that word in the title and in the article, badly (in my opinion) characterizing the work.


Probably institutional grant fishing.


I thought this was pretty obvious? "Cancer" is not a thing; it's a billion different things that happen differently in every single patient.


Thought I'd include the first line of the article:

> A Cancer Research UK-funded study, published in Cell Genomics, has revealed that prostate cancer, which affects one in eight men in their lifetime, includes two different subtypes termed evotypes.

In some cosmic sense, the number "one billion" and the number "two" are the same, I suppose.


While the parent commenter did exaggerate, they are correct in their idea. You could subclassify cancers all the way down to individual gene mutations, and even then there is heterogeneity within the cancer itself.

Medicine tries to draw boundaries where different therapies help differently or where there is different pathophysiology going on. The article was able to draw one such additional boundary. Its relevance, whether in phenotype or druggability, is yet to be confirmed.


I'd guess what they mean is there are two clusters with some clear distinguishing properties, and perhaps some resulting implications for treatment.


It's completely impressive how hackers here are multi-field specialists.


I'm always amazed to see all these "well actually" comments on every single post, regardless of the topic :)


and in a year we'll have a new report: "foo bar genomics has revealed that prostate cancer includes 3 new evotypes"

it's all just mutations and there is no upper bound on the number of mutations that can exist


Yes, that's somewhat true, but in practice we have subtypes. As a counterpoint to that assertion: if it were meaningfully a billion different things, then we would need a billion highly precise treatments. Yet we've managed to do decently with relatively few.


Yes, but this is true of nearly every illness, and it's true of natural phenomena in general, and it's true of most problems in statistical classification.

But we still like to classify things, because it often has predictive power and informs treatment.


Ok, it’s “obvious”. Now find good drug targets. Yeah, you can’t before you quantify all the “obvious” things. The actual patient data and derived insights are precious.


Not to be rude or anything, but .... no duh? This is why looking for a "cure for cancer" is a bit nonsensical. There are many different ways for cell division to go wrong. Prostate cancer is just a cancer that affects the prostate. There's no reason to assume there would be one pathology for that.


It's not that you're wrong, but you're missing the depth of the issue. Yes, we know that people are different, that there are many redundant pathways, that every poor bastard probably has his own mutation, etc.

But we need to actually identify the mechanisms, describe them in a lot of detail, and look for very specific biomarkers. That’s what “personalized medicine” is going to be.

It's extremely difficult to put a study like this together; so many parts can go wrong. So it's an achievement not just for the scientists who published in a good journal, but for the whole of humanity.


You are responding to the headline and not the content of the study. Yes, science headlines are stupid clickbait.


Yes, at the time I responded, the headline was much worse. The updated version actually provides some non-obvious information. The original merely said that prostate cancer isn't one disease.


There is still a lot we do not know about why some prostate cancers grow slowly and are effectively benign while others are viciously malignant. This split leads some health practitioners to either relax or ignore screening guidelines. Being able to better narrow things down would let us avoid overly aggressive treatment while being appropriately aggressive with the more malignant variants. (BTW, this is not just theoretical for me.)


IANAD, but the early and ongoing successes of immunotherapy across a wide variety of cancers suggests that your characterization of this effort as "nonsensical" is cynically oversimplified, if not wrong.


It's one thing to intuit something, but it's another to actually show it.



