All I have to say is this post warmed my heart. I'm sure people here associate him with Go lang and Google, but I will always associate him with Bell Labs and Unix and The Practice of Programming, and overall the amazing contributions he has made to computing.
To purely associate him with Google is a mistake that (ironically?) the AI actually didn't make.
Did Google, the company currently paying Rob Pike's extravagant salary, just start building data centers in 2025? Before 2025 was Google's infra running on dreams and pixie farts with baby deer and birdies chirping around? Why are the new data centers his company is building suddenly "raping the planet" and "unrecyclable"?
My dad was a busy construction contractor. One summer he tore himself away from work and took the family on a week-long boat camp-out next to a big beautiful lake. It turned out that our campsite was actually in the lake by a few inches at high water, but dad saw a way to dam it off and keep it dry, so he grabbed the shovel and started digging trenches, building walls, and ordering us around.
About an hour into that, pouring sweat, he stopped cold and said, "What the hell am I doing?" The flooded camp was actually nice on a hot day and all we really had to do was move a couple of tents. He dropped the shovel and spent the rest of the week sunbathing, fishing, snorkeling and water skiing as God intended. He flipped a switch and went from Hyde to Jekyll on vacation. I've had to emulate that a few times.
Everything humans do is harmful to some degree. I don't want to put words in Pike's mouth, but I'm assuming his point is that the cost-benefit ratio of how LLMs are often used is out of whack.
Somebody burned compute to send him an LLM-generated thank-you note. Everybody involved in this transaction lost, nobody gained anything from it. It's pure destruction of resources.
What is going through the mind of someone who sends an AI-generated thank-you letter instead of writing it themselves? How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?
I'm taking a moment to recognize once more the work that user @atdrummond (Alex Thomas Drummond) did for a couple years to help others here. I did not know him, don’t think I ever interacted with him, and I did not benefit from his generosity, but I admired his kindness. Just beautiful.
So not only did they enforce a ridiculously small message limit, they also did it for the self-hosted version, and they did it without announcing it AND without a suitable migration path.
And still no one from that company has admitted it was a mistake?
I think this post does a really good job of covering how multi-pronged performance is: it certainly doesn't hurt uv to be written in Rust, but it benefits immensely from a decade of thoughtful standardization efforts in Python that lifted the ecosystem away from needing `setup.py` on the hot path for most packages.
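To make that point concrete, here's a minimal sketch of my own (not from the post; the package name is just an example) that asks PyPI's JSON API whether a project's latest release ships a prebuilt wheel, i.e. whether installers can skip running `setup.py` entirely on the hot path:

```python
# Rough sketch (mine, not from the post): check whether a package's latest
# release on PyPI includes a prebuilt wheel, meaning installation is just
# unpacking files with no setup.py build step required.
import json
import urllib.request

def ships_wheel(package: str) -> bool:
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    # "urls" lists the files of the latest release; any bdist_wheel entry
    # means an installer never has to execute setup.py for this package.
    return any(f["packagetype"] == "bdist_wheel" for f in data["urls"])

print(ships_wheel("requests"))  # -> True for most modern, well-packaged projects
```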
I'm a T1 diabetic, have worked on open source diabetes-tech (OpenAPS), and have used a number of different CGMs (though not this one specifically). This story... does not make very much sense.
CGMs (of any brand) are not, and have never been, reliable in the way that this story implies people want them to be. The physical biology of CGMs makes that sort of reliability infeasible. Where T1s are concerned, patient education has always included the need to check with fingerstick readings sometimes, and to be aware of mismatches between sensor readings and how you're feeling. If a brand of CGM has an issue that sometimes causes false low readings, then fixing it, if it's fixable, is great, but that sort of thing was very much expected, and it doesn't seem reasonable to blame it for deaths. Moreover, there are two directions in which readings can be inaccurate (false low, false high) with very asymmetric risk profiles, and the report says that the errors were in the less-dangerous direction.
The FDA announcement doesn't say much about what the actual issue was, but given that it was linked to particular production batches, my bet is that it was a chemistry QC fail in one of the reagents used in the sensor wire. That's not something FOSS would be able to solve because it's not a software thing at all.
A large percentage of the homeless have autism [1]. And that really sucks. If these people don't have support, their lives can turn miserable fast. And unfortunately it's just way too easy for these people to end up in abusive situations.
It's a lot of work to care for people with moderate to severe autism. There is no standard for what they need; their capabilities can be all over the board. Some of them are capable, like Ronny in this story, and can hold down jobs. But others need 24/7 caregiving in order to survive. Unfortunately, I don't think those with severe autism survive for long when they become homeless.
I hope this story, at the very least, gets people to view the homeless a little differently. They aren't all there because of vices or failure. A large percentage are there because society does not care for those with mental disabilities. It was good of this story to highlight that Ron had problems with gambling. Autism does, in fact, make an individual more prone to various addictions.
My point in writing this: please have some humanity toward the homeless. I get that they can be inconvenient. But they are people, and they aren't necessarily bad people because of their circumstances.
To be clear, this email isn't from Anthropic, it's from "AI Village" [0], which seems to be a bunch of agents run by a 501(c)3 called Sage that are apparently allowed to run amok and send random emails.
At this moment, the Opus 4.5 agent is preparing to harass William Kahan similarly.
This seems like a tragedy of the commons -- GitHub is free after all, and it has all of these great properties, so why not? -- but this kind of decision making occurs whenever externalities are present.
My favorite hill to die on (an externality) is user time. Most software houses spend so much time focusing on how expensive engineering time is that they neglect user time. Software houses optimize for feature delivery, not user interaction time. Yet if I spend one hour making my app one second faster for my million users, I can save 277 user-hours per year (rough math sketched below). But since user hours are an externality, such optimization never gets done.
Externalities lead to users downloading extra gigabytes of data (wasted time) and waiting for software, all of which is waste that the developer isn't responsible for and doesn't care about.
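For what it's worth, here is the back-of-the-envelope version of that user-time claim; the figures are the same hypothetical ones as above (a million users, one second saved, one affected use per user per year), not measurements:

```python
# Back-of-the-envelope sketch of the user-time math above (hypothetical
# numbers: a million users, one second saved, one affected use per year).
users = 1_000_000
seconds_saved_per_use = 1
uses_per_user_per_year = 1

user_hours_saved = users * seconds_saved_per_use * uses_per_user_per_year / 3600
print(f"{user_hours_saved:.1f} user-hours saved per year")  # -> 277.8
```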
Somebody has to be the brave experimenter that tries the new thing. I'm just glad it was these folk. Since they make no tangible product and contribute nothing to society, they were perhaps the optimal choice to undergo these first catastrophic failed attempts at AI business.
My parents once took a struggling man in. I think he stayed with them for about three years, up until the moment I was conceived and my mom started planning for our family's future and helped him get into a housing project. For all of my life before adulthood this man would show up once in a while on his racing bike for coffee and talk, and proceed to stay for dinner. He was kind, funny and a bit strange. His life's story had more drama than a soap opera, but you wouldn't know it. After my father died I looked for him, but never found him. I still search online for him once in a while, fully knowing he probably isn't alive anymore and probably wouldn't be online anyway. There is some story in my head that he showed up at my dad's doorstep once on his racing bike, found other people living there, and was too shy to ask for details. A trace lost.
Not as hugely generous as this story, but throughout his whole career as a college professor, starting in the '70s, my father always took care that none of his students spent any major holiday alone and away from home, so we always ended up having 2 or 3 of them around for Christmas, the New Year, Easter...
They were from all around the country and the world, and it was so very enriching for me and my siblings. I had a huge postage stamp collection from the ever-increasing well-wishing mail that arrived.
It's also kind of comforting to think that anywhere in the world, you are not that far from someone who remembers you fondly.
Happy Christmas, everyone. I'm at 29,000 feet on a flight to Hong Kong after a mini version of Planes, Trains and Automobiles, including cancelled flights and taxis to different airports. I'm struck by what a miracle it honestly is that we can travel thousands of miles at hundreds of MPH, have okay internet access, and communicate. I don't think people quite realise how delicate all of this technology is now and how easily it could fail if we don't all look after each other. I hope you all have a brilliant Christmas and New Year!
I don't really understand the hate he gets over this. If you want to thank someone for their contribution, do it yourself. Sending a thank-you from an ML model is anything but respectful. I can only imagine that if I got a message like that I'd be furious too.
This reminds me of a story from my mom's work years ago: the company she was working for announced salary increases to each worker individually. Some, like my mom, got a little bit more, but some got a monthly increase of around 2 PLN (about $0.50). At that point it feels like a slap in the face. A thank-you from an AI gives off the same vibe.
The important point that Simon makes in careful detail is: an "AI" did not send this email. The three people behind the Sage AI project used a tool to email him.
According to their website this email was sent by Adam Binksmith, Zak Miller, and Shoshannah Tekofsky and is the responsibility of the Sage 501(c)3.
No one gets to disclaim ownership of sending an email. A human has to accept the Terms of Service of an email gateway and hold the credit card used to pay for it. This performance art does not remove the human, no matter how much they want to be removed.
Funny how so many people in this comment section are saying Rob Pike is just feeling insecure about AI. Rob Pike co-created UTF-8, Go, Plan 9, etc. On the other hand, I am trying hard to remember anything famous created by any LLM. Any famous tech product at all.
The authors report that restoring NAD+ balance in the brain -- using a compound called P7C3-A20 -- completely reversed Alzheimer's pathology and recovered cognitive function in two different transgenic mouse models (one amyloid-based, one tau-based). The mice had advanced disease before treatment began.
- There's room for skepticism. As Derek Lowe once wrote: "Alzheimer's therapies have, for the most part, been a cliff over which people push bales of money. There are plenty of good reasons for this: we don't really know what the cause of Alzheimer's is, when you get down to it, and we're the only animal that we know of that gets it. Mouse models of the disease would be extremely useful – you wouldn't even have to know what the problem was to do some sort of phenotypic screen – but the transgenic mice used for these experiments clearly don't recapitulate the human disease. The hope for the last 25 years or so has been that they'd be close enough to get somewhere, but look where we are."
- If the drug's mechanism of action has been correctly assigned, it's very plausible that simply supplementing with NMN, NR, or NADH would work equally well. The authors caution against this on, IMO, extremely shaky and unjustified grounds. "Pieper emphasized that current over-the-counter NAD+-precursors have been shown in animal models to raise cellular NAD+ to dangerously high levels that promote cancer."
Data center power usage has been fairly flat for the last decade (until 2022 or so). While new capacity has been coming online, efficiency improvements have been keeping up, keeping total usage mostly flat.
The AI boom has completely changed that. Data center power usage is rocketing upwards now. It is estimated it will be more than 10% of all electric power usage in the US by 2030.
It's a completely different order of magnitude from pre-AI-boom data center usage.
I wish they'd let me recover my original account -- I lost my TOTP generator, and the codes I'd written down in a paper notebook were rejected. I even hunted down the electronic copy in case there was a transcription error -- it seemed like some failure in their systems was causing me to lose access despite having followed proper procedures.
I lost a decade and a half of correspondence dating back to my teenage years. I had imported the phone number I'd had since I was 16 into Google Voice, and it doubled as my Signal number. I even had a G Suite subscription so I could use their (admittedly decent) UI to power my firstname @ lastname dot com email address.
I will never use their services again; I was really disgusted by this failure.
The set of toys I spent the most time playing with was a big bag of wooden blocks my grandfather gave me when I was very small. They are well designed, with a good selection of different shapes, e.g. it has cylinders and arches and thin planks as well as cuboids. They got a lot of use because they're so flexible in combining with other toys, e.g. you can build roads and garages for toy cars, or obstacle courses for rolling marbles. The edges and corners are rounded and the wood tough enough that clean-up was just dropping them back into the bag.
I've since given them to a nephew and I'm happy to see he gets just as much entertainment out of them as I did. Plain wooden blocks can represent almost anything. There are no batteries or moving parts to fail. Mine got a little bit of surface wear but they still work just as well as they did when they were new and small children don't care about perfect appearance. I wouldn't be surprised if they end up getting passed down to another generation and continue to provide the same entertainment. I highly recommend this kind of simple toy for young children.
>"For myself, the big fraud is getting public to believe that Intellectual Property was a moral principle and not just effective BS to justify corporate rent seeking."
If anything, I'm glad people are finally starting to wake up to this fact.
It is nice to hear someone so influential just come out and say it. At my workplace, the expectation is that everyone will use AI in their daily software dev work. It's a difficult position for those of us who feel that using AI is immoral due to the large-scale theft of the labor of many of our fellow developers, not to mention the many huge data centers being built and their need for electricity, pushing up prices for people who need to, ya know, heat their homes and eat.
In the US, the homeless population exploded in the 1980s, when they closed down all the mental institutions. Before that, there was a far less pervasive homeless population in urban areas.
Being "on the spectrum," myself (but highly functional), I can attest to how easy it is for an autistic person's life to go sideways. Many autistic folks have very specialized and advanced skills, which can sometimes be applicable to making a living (like programming, or visual design).
However, we're "different," which often leads to being shunned/traumatized by neurotypicals. I got used to folks eventually walking away from relationships, for no discernible reason. Used to really bother me, until I figured it out. Now, I just take it in stride, and appreciate whatever time I get to spend with folks. If anyone has seen The Accountant (the first one), there's a scene, near the end, where Ben Affleck's character is considering putting the moves on Anna Kendrick's character, but remembers his father, admonishing him that people will always end up being frightened of "the difference," and he sneaks out, instead. That scene almost brought me to tears, I could relate so well.
For some folks, it's much worse. They can be relentlessly bullied, abused, locked up, or shunned, which leaves psychological scars that manifest as antisocial behavior, so they are never given a chance to show what they can do.
> To purely associate him with Google is a mistake that (ironically?) the AI actually didn't make.
Just the haters here.