i_love_limes's comments

Other high ranking military officers that have worked closely with Trump disagree. I might be inclined to believe them over you, unless you've also worked with Trump? Or are you just someone that he would call a 'sucker'?

https://www.nytimes.com/2024/10/22/us/politics/john-kelly-tr...


Yes, you should listen to an actual grifter and live in fear.

People like me won't. You not being able to resonate with that is what makes you and me different - and one of us capable of defending freedom and the other not.


Ah, so he's a person who built and fought for democracy, but not the right person who built and fought for democracy.

And it's really not hard to find more veterans supporting Harris; just the top two search results:

https://commondefense.us/vets-for-harris

https://votevets.org/press-releases/votevets-makes-historic-...


> and one of us capable of defending freedom and the other not.

Did you just imply that these high ranking military officers are not the ones actually defending everyone's freedoms?

Please stop with the talking points and actually think about what you are repeating again and again.


I have a question that hopefully a molecular biologist can answer. Can tools like this potentially create protein structures that specifically bind in certain cells? Or is this more about a way of being able to create proteins for genes / structures we haven't been able to before?

My research at the moment is focused on pleiotropy, namely mapping pleiotropic effects across as many *omics/QTL measurements and complex traits as possible. This is really helpful for determining which genes / proteins to focus on for drug development.

The problem with drugs is in fact pleiotropy! A single protein can do quite a lot of things in your body, either through a causal downstream mechanism (vertical pleiotropy) or through seemingly independent processes (horizontal pleiotropy). This rules out a lot of possible drug targets, because the side effects / detrimental effects may be too large.

So, if these tools can create ultra specific protein structures that somehow only bind in the areas of interest, then that would be a truly massive breakthrough.


For anyone who would like to know more about designing proteins with a certain function, target, or structure in mind, the term to search for is "rational design."

https://en.m.wikipedia.org/wiki/Rational_design


Thank you for this, terms of art are the silent gatekeepers...


As an aside, learning the precise terms for concepts in fields in which I'm a layperson (or simply have some cobwebs to shake loose)--and then exploring those terms more--is something that I've found LLMs extraordinarily useful for.


Also "off target effects".


This research is focused on modeling individual protein binding sites. Pleiotropic effects and off-target side effects are caused by interactions beyond the individual binding sites. So I don't think this tool by itself will be able to design a protein that acts in the way you describe (and that's putting aside the delivery concerns - how do you get the protein to the right compartment inside the cell?).

But novel binding domain design could be combined with other tools to achieve this effect. You could imagine engineering a lipid nanoparticle coated in antibodies specific to cell types that express particular surface proteins. So you might use this tool to design both the antibody binding domain on the vector and also the protein encoded by the payload mRNA. Not all cell types can be reached and addressed this way, but many can.


Yes, in principle, but there are huge limitations and challenges to using a protein as a drug in living organisms. It has to be injected to avoid digestion, and a protein can't just pass into a cell; it needs to get in somehow. Current peptide drugs like insulin are identical to, or closely mimic, natural small peptide hormones that bind to receptors on the outside of a cell. However, there is a possibility of using gene therapy to directly express a novel protein drug inside the cell. A novel protein is also likely to trigger an immune response, so that type of gene therapy is mostly useful when that is actually desired, e.g. as a vaccine.


While they can generate proteins that bind to specific structures with high accuracy, achieving true cell-specificity and avoiding unwanted pleiotropic effects involves many more variables beyond just protein-protein interactions. These tools are more about expanding our ability to target previously "undruggable" proteins than about solving the cell-specificity problem outright. However, they could be valuable components in developing more targeted therapies when combined with comprehensive research on pleiotropic effects across multiple omics levels. The real breakthrough will come from integrating these protein design capabilities with a deeper understanding of complex biological systems and developing strategies for precise delivery and regulation of these novel proteins in vivo.


Not an expert, but you could imagine a protein with two binding domains that are both required for activation. One of them binds to a protein that is only present in the cells of interest, and the other binds to the actual target.


[flagged]


What LLM wrote this??


So I write a completely defensible rant full of truly interesting and well-informed perspective, get downvoted, and then get accused of being an LLM.

A perspective I'm sure the down-voters have zero cred to doubt. Style issues aside, I don't think there's a serious molecular biologist on the planet who would take issue with the actual gist of what I said.

Pleiotropy: a thing happening can cause more than one other thing to happen. We really need jargon to keep that in mind?

An LLM? This is what I get for writing with passion? Creatively? Daring to play with words? I'm an LLM? For writing anything that doesn't fit your norms? Wow.

How do I get MORE downvotes? They seem like badges of honor in this case.


I suggest you read 'Study Size' under 'Methods'. They powered the study to detect an odds ratio of 1.3 with 80% power, then rounded way up to 3000 cases. To me that's not a small sample size, but definitely not the highest OR to aim for, either. Having different numbers of cases and controls is not a problem for this study design. You pointing that out as a negative makes me think you might not know as much about epi study designs as your comment lets on.
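For anyone curious where a number like that comes from, here is a rough sketch in R of the standard unmatched case-control sample size calculation (a Fleiss-type formula without continuity correction). The 10% exposure prevalence among controls is an assumption picked purely for illustration, not a figure taken from the paper:

    # Cases needed (one control per case) to detect a given odds ratio.
    # p0 = assumed exposure prevalence among controls (illustrative only).
    cases_needed <- function(or = 1.3, p0 = 0.10, alpha = 0.05, power = 0.80) {
      p1   <- or * p0 / (1 + p0 * (or - 1))   # exposure prevalence in cases implied by the OR
      pbar <- (p0 + p1) / 2
      za   <- qnorm(1 - alpha / 2)
      zb   <- qnorm(power)
      n <- (za * sqrt(2 * pbar * (1 - pbar)) +
            zb * sqrt(p1 * (1 - p1) + p0 * (1 - p0)))^2 / (p1 - p0)^2
      ceiling(n)
    }

    cases_needed()   # ~2300 cases with these illustrative inputs

The result is very sensitive to the assumed exposure prevalence, which is one reason rounding up generously, as the authors did, is a reasonable design choice.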


You are trying to justify one strand of unethical behaviour (how humans have bred dogs) with another. Not sure that holds up.


Can you unpack why you think breeding dogs - by which I take you to mean all dog breeding, not just the kind that leads to serious physiological issues such as breathing difficulties in pugs - is unethical?


>how humans have bred dogs [was unethical]

This is far from being a consensus, I think.


I'm not sure if I see this line of reasoning. Nature has countless symbionts, parasites etc, are those all unethical?


My ethical framework agrees with you, but ethics, like everything, are relative.


For those not in Academia, it's worth noting that when you get to his level, it's more like being a VP or senior leader. Ultimately, yes you are responsible for the quality of output of your team, but you are absolutely not looking in detail at every paper, more the ideation and focus of the department.

He should be grilling his assistant profs / research fellows to get their act together and raise the bar, but this doesn't show malfeasance.


Important point, thank you.

It doesn't mean we shouldn't work to fix the systemic issue though. It may not be his "fault", but how do we improve our system as a society so we put ourselves on an upward trajectory, not the seemingly downward trajectory we've been on lately?


I'm glad the GP pointed this out, because nobody talks about it. The "PI" of a lab is a manager, so these issues are like an accountant embezzling money without the manager knowing. Maybe the manager should have put better processes and monitoring in place, but they were trusting their team members to behave properly, which is not unreasonable IMO.

The real issue is that the people who conduct the fraud are usually grad students or postdocs, whose entire future depends on the success of their research project. Fake results are pretty much guaranteed.

Think about it this way: Imagine you have a lab group with 10 PhD students, each with their own hypothesis to investigate (e.g. intervention A will reduce the rates of disease B in mice). What are the odds that all 10 students will prove their hypothesis and generate publishable results? No way it's 10/10 obviously...it's more like 5/10. So what is supposed to happen to those 5 students that were tasked with investigating the bad hypotheses? Our current system implicitly penalizes these students to the extent that their careers in academia are over, and even earning their PhD is not certain. BTW, I know I am being overly general here, and the student will likely have several parallel projects on-going, can pivot to other things, etc, but hopefully my general point is clear.


It is not unusual, and indeed it is what should be expected, for honors to be accompanied by liabilities.

Ambitious PIs want bigger labs that lead to the recruitment of better students, who then produce more impactful papers, which then support the demand for more funding. It is a positive reinforcement cycle that eventually leads to bigger, better, and more popular labs. Those are the honors.

The liability is that if your name is in the article as a senior co-author, you are just as responsible as the first author for errors or fraudulent research. The senior PI's actual contribution should not matter, their name is there, the publication is used to support their career, they recruited the students or postdocs.


> you are just as responsible as the first author for errors or fraudulent research

I know what you're trying to say, but I think you're making it too black-and-white. There are two nuances I'd like to point out: Firstly, the senior author did not actually perpetrate the fraud... this has to mean something when assessing blame, I think. Secondly, the senior authors do not really have the ability to filter out fraud, assuming it's done cleverly. What can they do aside from reading the drafts and scrutinizing the data/methods/interpretation? Are they expected to have a team of shadow PhDs doing the same experiments to ensure reproducibility?

No doubt some PIs create an environment that encourages fraud, and that's a problem. But the point I'm trying to make is that if we want to solve the problem of scientific fraud we need to be honest about the source of the problem. In my opinion, it's the fact that a student's entire future is wholly dependent on a good result. The senior author already has a job, probably tenure, and plenty of other projects on the go, so one failed project is not a problem. The cost of failure to the student on the other hand is essentially infinite!


> What can they do aside from reading the drafts and scrutinizing the data/methods/interpretation?

You would be surprised how few of the big labs' PIs even do that. And since a big lab, say in biology, can send out 40–50 papers a year, there is no time for the PI to think deeply about hypotheses, methods, and data collection. But having a big lab is a decision, as I wrote in my previous comment: honors/grants and liabilities.

> In my opinion, it's the fact that a student's entire future is wholly dependent on a good result.

That's very true, but there is also a thing called personal responsibility. Any non-violent "fraud", any "criminal", has some reasonable motivation behind their actions. But committing fraud is not an inevitability, and weakening punishment out of sympathy for those motivations ends up punishing the people who behave, loosely speaking, properly.

Years ago, when I was doing academic research, I asked a colleague of mine if they would change some of their research results if the fraud (a) was never discovered and had no general consequences, (b) led to a publication in Science, Nature, Cell, etc. that would semi-guarantee a tenure-track position, and with that, "bread on the table" for the family, the kids, the aging parents. They said they would never do that, but was it true for them? Would it be true for me? Since the question is legitimate, strong punishment is needed to reduce the occurrence of fraud in research.

And since there is one tenure-track position available for dozens of good applicants, it is natural that a good result will make the difference between having a professional life in academia and not. But is it not the same, with the kind of "good result" depending on the field, for all those fields in which there are many more participants than "winners"? An immediate parallel can be made with doping in sports.


Your point is clear and extremely important. Edison supposedly once said, "I didn't fail. I just discovered 99 ways not to build a light bulb." Or something like that. Among those 99 failures were 20 super meaningful discoveries. In those failures the world's understanding of material science advanced in ways that affected a million later research projects.

A PhD candidate who can prove a hypothesis wrong should often have their work valued as much as one who proved the opposite.

But consider something like the invention of Paxos. If you leave out one small piece, you fail. All that time and effort seems wasted. You haven't proved anything true or false. You've just failed. But if you've documented your failure sufficiently, somebody might come behind you and fix that one little piece you got wrong.

One of the problems with our current system is that three years or ten years of research never gets published or properly documented for posterity because it didn't succeed. Even failures should be written up and packaged for the next grant to extend the exploration. There needs to be some reward for doing that packaging. Maybe we can call it a PhaD (almost PhD). Do you award a PhD to those who take up their own or somebody else's PhaD and complete it successfully?


I had a bit of a eureka on this subject this afternoon: when looking at scientific fraud and who to blame, we (as a society) tend to focus on who stands to gain if the fraud is successful, but instead we should look at who stands to lose the most if the fraud is caught.

Let me explain via a 2x2 matrix (which I highly doubt will render properly, but here goes):

    Actor     | Fraud is successful | Fraud is caught
    ----------|---------------------|----------------
    Professor | Scenario A          | Scenario B
    Student   | Scenario C          | Scenario D

Scenario A: If a fraud is successful, the senior author gets a small benefit in the form of a slight raise, incremental increase in success rate of next grant, maybe an award, some endorphins from the praise. Very minor actually.

Scenario B: If the fraud is caught the senior author's career could be in shambles, like resigning from their tenured position, losing investors in spin-offs, humiliation, etc. They have a lot to lose.

Scenario C: In the event of a successful fraud, the student stands to gain a lot in the form of job prospects, future income, and generally accomplishing their life's ambitions. There is a huge payoff for the student in this scenario.

Scenario D: If the fraud is caught, their career in academia is over and they have wasted 3-4 years of their life, which is the same outcome as if they had never committed fraud and simply let the bad result stand. The student has nothing to lose!


I ran into this too. I run a very silly Slack bot for my friends, and it randomly cycles through pictures that we all have created. Initially it was a completely random choice per invocation. I had to change it to a randomly sorted list that is stored and iterated through until it is depleted, then re-randomised, for the same reason: complaints that actual random choice chose duplicate pictures too often.
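For anyone curious, a minimal sketch of that shuffle-bag approach in R (all names here are made up for illustration; the actual bot presumably looks nothing like this):

    # Draw from a randomly ordered copy of the list; re-shuffle only once it's depleted.
    make_shuffle_bag <- function(items) {
      bag <- sample(items)            # randomly sorted copy of the full list
      pos <- 0
      function() {
        pos <<- pos + 1
        if (pos > length(bag)) {      # bag depleted: re-randomise and start over
          bag <<- sample(items)
          pos <<- 1
        }
        bag[pos]
      }
    }

    next_picture <- make_shuffle_bag(c("lime1.png", "lime2.png", "lime3.png"))
    next_picture()   # no picture repeats until every one has been shown once

This guarantees no repeats within a cycle, whereas a fresh independent random choice per invocation produces duplicates surprisingly often.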


Note that if your playlist is append-only, you can use format-preserving encryption and just store a seed, a counter, and the length of the list when you started, instead of storing the whole shuffled list.
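To make that idea concrete, here is a toy sketch in R: a small keyed Feistel permutation over playlist indices with cycle-walking, so only a seed, a counter, and the starting length need to be stored. The round function and all names are invented for illustration; a real implementation would use a proper format-preserving encryption scheme such as FF1:

    # Maps counter (0-based) to a 1-based playlist index; distinct counters in
    # 0..n-1 give distinct indices, so it behaves like a stored shuffle.
    shuffled_index <- function(counter, n, seed, rounds = 4) {
      bits <- max(2, ceiling(log2(n) / 2) * 2)   # even bit-width covering 0..n-1
      half <- bits %/% 2
      mask <- bitwShiftL(1L, half) - 1L
      round_fn <- function(x, k) {               # cheap keyed mixing, illustration only
        bitwAnd((x * 2654435761 + k * 40503 + seed) %% 2^31, mask)
      }
      x <- counter
      repeat {
        l <- bitwShiftR(x, half)
        r <- bitwAnd(x, mask)
        for (k in seq_len(rounds)) {             # Feistel rounds are always a bijection
          tmp <- r
          r   <- bitwXor(l, round_fn(r, k))
          l   <- tmp
        }
        x <- bitwOr(bitwShiftL(l, half), r)
        if (x < n) return(x + 1L)                # landed inside the list: done
        # otherwise cycle-walk: feed the out-of-range value back through
      }
    }

    sapply(0:9, shuffled_index, n = 10, seed = 12345)   # a permutation of 1..10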


> Best Things First. By Bjorn Lomborg

Wait, this is the social scientist who wrote The Skeptical Environmentalist, which a Danish science committee found to be "scientifically dishonest through misrepresentation of scientific facts", while holding Lomborg himself not guilty due to his lack of expertise in the fields in question. [1]

And now he's written a book about another topic he doesn't have a deep understanding of. Forgive me if I give it a miss.

[1] https://en.wikipedia.org/wiki/Bj%C3%B8rn_Lomborg#The_Skeptic...


Did you read the rest of the link you provided? The committees decision was overturned:

"the Ministry annulled the DCSD decision, citing procedural errors, including lack of documentation of errors in the book, and asked the DCSD to re-examine the case. In March 2004, the DCSD formally decided not to act further"


A lot of people... in fact a huge portion of statisticians, epidemiologists, and econometricians use it as their primary language.

I do genetic epidemiology (which is considerably more compute intensive than regular epidemiology), and R is still the most common language, with the most libraries and packages being used for it, compared to python for example.

I think maybe you should consider being less forthcoming with your opinions on topics which you are not well informed on.


I worked in data science for a few start-ups, and even though I know Python (it's my LeetCode language of choice), R just dominates when it comes to accessing academic methods and computational analysis. If you are going to push the boundaries of what you can and can't analyse for statistical effects and leverage academic learnings, it's R.


In my field (genetic epidemiology), there are annoyingly un-standardised toolsets. There are libraries in R, python, and C/C++ binaries. Being able to string these together in one notebook is helpful.

That being said, I usually just stick to one notebook per thing.


So, this peer reviewed analysis of real data is wishful thinking science compared to your anecdotal observations? I might suggest you rethink your stance based on new information.

It's also interesting how, if school didn't play a factor, putting your kid into an under-performing inner-city school wouldn't matter (and conversely, neither would an elite private school).


Do you remember Popeye? Do I have to say more? How long did it take to uncover that spinach does not actually have super-powers? Peer reviewed, don't make me laugh.

