I am unsure of the impact of this on the regular consumer, as this seems like a pretty niche area. But it's kinda shitty that the interpersonal relationship between two companies is negatively impacting their customers.
It's not up to public opinion or customer politics to resolve your schoolyard differences. We just want to buy your products without getting loaded with baggage. We don't owe you loyalty on top of the price we paid for the product.
It reeks a bit of tech self-importance. In an overall business context, you balance time and effort against short- and long-term reward, and design or fix things when you need to.
There are many dials to manipulate in a business to make it more efficient. And not all businesses are tech startups. Software, fortunately or unfortunately, has a lot of room to be inefficient due to cheap hardware as long as it gets the job done in a lot of industries.
I have a technical background as a data scientist and developer. If I look back 5-10 years ago, I can definitely recognize my bias towards over-engineering and premature perfectionism. Identifying that sweet spot between over-designing and unplanned failure is key.
So the queen can lay three types: hybrid female, M. ibericus male, M. structor male. Did they do karyotyping? Did the queen somehow remove her own genetic material from the nuclei, or does it get silenced when the M. structor genetic material is present in the nuclei (which would be interesting in itself)? Perhaps some kind of complex imprinting is happening.
I think this sentiment has probably been echoed through the ages.
It feels like there's an assumption that we've reached some kind of complexity ceiling, and that compressing the complexity below us will just make us dumb. What if we've black-boxed the complexity below us so we can explore more complexity above us?
Maybe the argument is that the rate of compressing complexity below us is faster than the rate of expanding the complexity space above us, and the result is that we run out of knowledge to digest and explore? Perhaps the answer to that is to make people more curious to go out and explore the complexity above us so we can generate that knowledge.
This isn't an unpopular opinion. I would argue this is the mainstream argument.
I think all medical advances benefit the wealthy first and then become more affordable over time.
The term "aging" seems to trigger a lot of people and lead to philosophizing over the importance and morality of death. These are important topics to discuss, but I think it is also worthwhile to hear out the optimist perspectives rather than the endless dystopic cynicism we hear on a daily basis.
It's certainly not the mainstream position here on HN, according to this informal study of provoking commenters with incendiary remarks...
It's true that there are many age-associated diseases that are morally trivial to oppose: a good society should want to minimize preventable suffering. However, dementia, cancer, and cardiovascular research programs already exist, both privately and publicly funded, and these initiatives have existed for many decades without needing to be labeled "aging" research. So let's be clear and refer to these initiatives as life extension rather than anti-aging, because that is the actual goal.
The best optimist narrative I can come up with is as follows: without the looming fear of death over our heads, humanity will be liberated from (a) the grief of losing loved ones, (b) the suffering of old age, and (c) the capacity lost when someone dies. In particular, (c) might mean that geniuses stay productive forever. A little more fancifully, it is sometimes suggested that the value of a human life approaches infinity as human lifespans approach infinity, so the fear of violent death would effectively prevent all violent conflict.
There is then often an emotional appeal about how much more time we would be afforded for exploring the universe and undergoing personal growth; at this point of the conversation you can really tell that the person trying to sell you on the anti-aging agenda is from California, and has tried LSD (or at least pot), and maybe knows a thing or two about Buddhism and Star Trek. (Perhaps they're even fans of Iain M. Banks?) Just think of all the good someone like the Dalai Lama could do if he could literally meditate for centuries, achieving ultimate enlightenment! What if Terry Pratchett and Douglas Adams never died? How can you afford to say no?!
The answer to this all comes to us from a lesser-known member of the _literati_ of the 20th century, an obscure writer called Charlie Chaplin:
> To those who can hear me, I say - do not despair
> The misery that is now upon us is but the passing of greed - the bitterness of men who fear the way of human progress
> The hate of men will pass, and dictators die, and the power they took from the people will return to the people
> And so long as men die, liberty will never perish
In the optimist's world, where everyone gets to live forever, we do not get to pick and choose who attains that status. Josef Stalin, Fidel Castro, and Francisco Franco all died of old age while maintaining regimes that actively harmed their people. On balance, any one individual can do more harm than good.
...And this is not even discussing the problem of population dynamics—how do we maintain balanced numbers? What kind of work will still need to be done? If people stopped aging suddenly, would there be people trapped in shitty jobs for centuries? (Some of this also applies to mind-uploading.)
If the reaction is, "but surely we can advance robotics to achieve fully-automated luxury gay space communism like Iain M. Banks wanted," then let's do that first, before we let a handful of grossly wealthy private equity goons forge the Rings of Power for themselves. There's no rush, right? Right?
It might not be the mainstream on HN, but most popular polls I've seen show similar trends of a lesser proportion of people wanting to live longer, citing the same societal collapse concerns. In any case, whether something is espoused by the majority or the minority doesn't really add much weight.
I don't think there is an "anti-aging agenda". Not everything needs to be seen through the lens of an ideological movement. But I do think there is an unhealthy, persistent cynicism underneath current popular culture. This cynicism makes people not want to be optimistic or idealistic for fear of being wrong or looking naive. I am not suggesting we should all tint our lenses rose-colored, but I do think allowing people to expand their optimistic ceiling is warranted, especially when it is so easy to imagine a dystopic future right now.
Nonetheless, I thoroughly enjoyed your sardonic reply.
At a higher level, MCP seems to want to enforce a standard where no standard exists. I get that the low-level technical implementation allows AI to utilize these tools.
But there doesn't seem to be any standardization or method in how to describe the tool to the AI so that it can utilize it well. And I guess part of the power of AI is that you shouldn't need to standardize that? But shouldn't there at least be some way to describe the tool's functionality in natural language or give some context to the tool?
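To make the question concrete, here is a minimal sketch of the kind of natural-language metadata a tool could carry alongside its machine-readable argument spec. Every field name here (`usage_hints`, `input_schema`, etc.) is hypothetical and illustrative, not taken from any published MCP schema:

```python
# A hypothetical tool descriptor: a machine-readable argument spec paired with
# free-text descriptions a model could use to decide when and how to call it.
tool_descriptor = {
    "name": "search_invoices",        # machine-readable identifier (hypothetical)
    "description": (                  # natural-language summary for the model
        "Search the billing database for invoices. "
        "Use this when the user asks about past charges or payments."
    ),
    "usage_hints": [                  # extra context a model could exploit
        "Prefer date ranges under 90 days to keep results small.",
        "Returns at most 50 invoices per call.",
    ],
    "input_schema": {                 # JSON-Schema-style argument spec
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Free-text search terms"},
            "after": {"type": "string", "description": "ISO 8601 start date"},
        },
        "required": ["query"],
    },
}

# A client could fold the description and hints into the model's context:
prompt_fragment = tool_descriptor["description"] + " " + " ".join(
    tool_descriptor["usage_hints"]
)
print(prompt_fragment)
```

The point is less the exact shape than the split: the schema tells the model what arguments are legal, while the prose tells it when the tool is appropriate, and it's the latter that currently seems left to each tool author's discretion.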
Learning biology offers great insight into how humans think about complex systems. We tend to take a reductionist, engineering approach to figure out how things work. That is a perfectly valid approach when building up a complex system from knowable individual parts.
But when analyzing a complex biological system, we tend to make analogies to our own engineered components (motherboards, power sources, circuits). There are definitely a lot of similarities and it is a great way to understand one facet of the system. But it can also sometimes make us lose sight of the intertwining relationships among all of these parts through evolution.
The analogy of our genome to an informational blueprint is one of the best examples of the multi-faceted nature of biology. While the sequence of bases contained within DNA (primary structure) is informational, the complex structures of the molecule itself (chromatin structure) also have mechanistic purposes.
We build engineered components to be controllable and independent so we can better assess how the system is working. However, that is not an explicit goal with biology. Biological "components" settle into the best form for the given environment over time even if it creates a potential "mess" of connections and relationships.
I think this is also why biology takes a long time to "sink in" while learning compared to other technical fields. It's very easy to over-train your mental biological model on one facet of the system and lose sight of the others.
It is not uncommon to see the same terminology used in very different ways across various sub-fields of biology. Gene Ontology is a great example of this, as people have found biases associated with what the originating lab was studying at the time. Genes with pleiotropic function tend to get assigned functions more relevant to what that lab is interested in.
Sometimes I wonder if we are really equipped to navigate it and understand it. Maybe AI/computation is really the only way to try to have a holistic view of the complexities. Perhaps trying to understand biology with our biological brain has inherent limitations like a piece of software trying to understand the hardware it resides in.
You want to learn a musical instrument? If you aren't trying to be Mozart, don't even try. You want to learn some software engineering skills? If you aren't trying to be the next FAANG, don't even try.
How about we let people have fun and make mistakes. We don't need more disingenuous gatekeepers. Technical people can be such a drag sometimes (speaking as a technical person).
> I wouldn't break into someone's house and tell them they painted their house an uninformed color.
but if you, an experienced professional painter, were hired to repaint somebody's house, and that person used bad paint (or something else entirely, like... i dunno, shellac, or white glue), your scope of work changes from "light prep and paint" to a much more involved job of undoing the previous work (or mistakes) and then getting to the original scope of work.
> If something sucks or won't scale, it will sort itself out in the market.
i truly don't mean this in a derogatory way but that sounds incredibly naive.
At the end of the day we are really just disagreeing on how seriously we should take AI-ification of software engineering.
I take a lighter stance that it’s mostly harmless and has a lot of educational positives. I guess you foresee a lot of potential harm.
I do appreciate people who are willing to be the canary in the coal mine. I just find that a lot of the criticism appears to be more gatekeeping than legitimate worries.
I think this is just subject to normal market forces and very product dependent?
If you have engineering skills that are not easily found, then you can ask for more. If the founders don't give it to you, then they might not be able to build their idea.
If the product does not require niche or sophisticated engineering, then the founder will just move on to the hordes of candidates who can do the job.
I don't think "fairness" can be intrinsic (nor objective) in a market-driven economy. It's always going to be a push-pull, negotiation, calculating business. And you disenfranchise yourself out of this economy if you don't have the stomach for it.
It's a bad look for the parties involved.