Mine was entirely mechanical (driven by punch cards and a hand-crank), and changed all of the pixels in parallel, but a lot of the mechanism development looked extremely familiar to me.
Let me share a personal story. Back in 2014, when I was working at Cloudflare on DDoS mitigation, I collaborated a lot with a colleague - James (Jog). I asked him loads of questions, from "how do I log in to a server", through "what is anycast", to "tell me how you mitigated this one - give me the precise instructions you ran".
I quickly realised that these conversations had value outside the two of us - pretty much everyone else onboarded had similar questions. Some subjects were about pure onboarding friction, some were about workflows most folks didn't know existed, some were about theoretical concepts.
So I moved the questions to a public (within the company) channel, and called it "Marek's Bitching" - because that's what it was: pretty much me complaining and moaning and asking annoying questions. I invited more London folks (Zygis), and before I knew it, half of the company had joined.
It had tremendous value. It captured all the things that didn't have a real home elsewhere in the company, from technical novelties to discussions that escaped the usual structure - at one point we suspected Intel firmware bugs, but that was outside any specific team's remit at the time.
Then the channel was renamed to something more palatable - "Marek's technical corner" - and it has had a clear place in the company's technical culture for more than a decade.
So yes, it's important to have a place to ramble, and it's important to have "your own channel" where folks have less friction and stigma to ask stupid questions and complain. Personal channels might be overkill, but a per-team or per-location "rambling/bitching" channel is a good idea.
I'm reminded of the famous story of (I think) the central beam in a building at Oxford. The story goes something like:
The central beam was beginning to fail and the Oxford administration knew they needed to replace it. When they went around for quotes, no one could replace the beam because it was 100 ft in length and sourced from an old-growth tree. Such logs were simply unavailable to buy. To solve the issue, the staff began to look at major renovations to the building's architecture.
Until the Oxford groundskeeper heard about the problem. "We have a replacement beam," he said.
The groundskeeper took the curious admins to the edge of the grounds. There stood two old growth trees, over 150 feet tall.
"But these must be over 200 years old! When were they planted?" the admins asked.
I own a farm in Illinois and farm it myself. I own the land through an LLC, because farming is dangerous and I don't want to go bankrupt if somebody sues me. Farms are expensive and hard to subdivide, so people put them into a legal entity and pass them down to the next generation via a trust. All of my neighbors do the same, so we're all counted as "not farmers" here.
Farming is a terrible business. My few hundred acres (maybe worth $5M) will only churn out a few hundred grand in profit -- not even better than holding t-bills. The margins get better as you get bigger but still not great.
Many of the buyers keep growing their farms because it's a status symbol. Everybody in your area will instantly know you're a big wig if you're one of the X family who has 2,000 acres all without the ick that comes with running other businesses. You can't buy that kind of status in my community with anything other than land.
The diamond industry got into this mess by insisting that the best diamonds were "flawless". This put them into competition with the semiconductor materials industry, which routinely manufactures crystals with lattice defect levels well below anything seen in natural diamonds. The best synthetic diamonds now have fewer than one atom per billion in the wrong place.[1] Those are for radiation detectors, quantum electronics, and such. Nobody needs a jewel that flawless.
De Beers tried to squelch the first US startup to turn out gemstones in production by intimidating the founder. The founder was a retired US Army brigadier general (two Silver Stars earned in combat) and wasn't intimidated. That was back in 2011, and since then it's been all downhill for natural diamonds.
De Beers later tried building detectors for synthetic diamonds. Simple detectors exist for spotting cubic zirconia and such, but separating synthetic and natural diamonds is tough. The current approach is to hit the diamond with a burst of UV, turn the UV off, and then capture an image. The spectrum of the afterglow indicates impurities in the diamond. The latest De Beers testing machine [2] looks for nitrogen atoms embedded in the diamond, which are seen more in natural diamonds than in synthetics. The synthetics are better than the naturals. Presumably synthetic manufacturers could add some nitrogen if they wanted to bother.
This is the latest De Beers machine in their losing battle against synthetics. They've had DiamondScan, DiamondView, DiamondSure, SynthDetect, and now DiamondProof. Even the most elaborate devices have a false alarm rate of about 5%.[3]
This story has been reposted many times, and I think GJS's remarks (as recorded by Andy Wingo) are super-interesting as always, but this is really not a great account of "why MIT switched from Scheme to Python."
Source: I worked with GJS (I also know Alexey and have met Andy Wingo), and I took 6.001, my current research still has us referring to SICP on a regular basis, and in 2006 Kaijen Hsiao and I were the TAs for what was basically the first offering of the class that quasi-replaced it (6.01) taught by Leslie Kaelbling, Hal Abelson, and Jacob White.
I would defer to lots of people who know the story better than me, but here's my understanding of the history. When the MIT EECS intro curriculum was redesigned in the 1980s, there was a theory that an EECS education should start with four "deep dives" into the four "languages of engineering." There were four 15-unit courses, each about one of these "languages":
- 6.001: Structure and Interpretation of Computer Programs (the "procedural" language, led by Abelson and Sussman)
- 6.002: Circuits and Electronics ("structural" language)
- 6.003: Signals and Systems ("functional" language)
- 6.004: Computation Structures
These were intellectually deep classes, although there was pain in them, and they weren't universally beloved. 6.001 wasn't really about Scheme; I think a lot of the point of using Scheme (as I understood it) was that the language is so minimalist and so beautiful that even this first intro course can be about fundamental concepts of computer science without getting distracted by the language. This intro sequence lasted until the mid-2000s, when enrollment in EECS ("Course 6") declined after the dot-com crash, and (as would be expected, and I think particularly worrisome) the enrollment drop was greater among demographic groups that EECS was eager to retain. My understanding circa 2005 is that there was a view that EECS had broadened in its applications, and that beginning the curriculum with four "deep dives" was off-putting to students who might not be as sure that they wanted to pursue EECS and might not be aware of all the cool places they could go with that education (e.g. to robotics, graphics, biomedical applications, genomics, computer vision, NLP, systems, databases, visualization, networking, HCI, ...).
I wasn't in the room where these decisions were made, and I bet there were multiple motivations for these changes, but I understood that was part of the thinking. As a result, the EECS curriculum was redesigned circa 2005-7 to de-emphasize the four 15-unit "deep dives" and replace them with two 12-unit survey courses, each one a survey of a bunch of cool places that EECS could go. The "6.01" course (led by Kaelbling, Abelson, and White) was about robots, control, sensing, statistics, probabilistic inference, etc., and students did projects where the robot drove around a maze (starting from an unknown position) and sensed the walls with little sonar sensors and did Bayesian inference to figure out its structure and where it was. The "6.02" course was about communication, information, compression, networking, etc., and eventually the students were supposed to each get a software radio and build a Wi-Fi-like system (the software radios proved difficult and, much later, I helped make this an acoustic modem project).
The goal of these classes (as I understood) was to expose students to a broad range of all the cool stuff that EECS could do and to let them get there sooner (e.g. two classes instead of four) -- keep in mind this was in the wake of the dot-com crash when a lot of people were telling students that if they majored in computer science, they were going to end up programming for an insurance company at a cubicle farm before their job was inevitably outsourced to a low-cost-of-living country.
6.01 used Python, but in a very different way than 6.001 "used" Scheme -- my recollection is that the programming work in 6.01 (at least circa 2006) was minimal and was only to, e.g., implement short programs that drove the robot and averaged readings from its sonar sensors and made steering decisions or inferred the robot location. It was nothing like the big programming projects in 6.001 (the OOP virtual world, the metacircular evaluator, etc.).
So I don't think it really captures it to say that MIT "switched from Scheme to Python" -- I think the MIT EECS intro sequence switched from four deep-dive classes to two survey ones, and while the first "deep dive" course (6.001) had included a lot of programming, the first of the new survey courses only had students write pretty small programs (e.g. "drive the robot and maintain equal distance between the two walls") where the simplest thing was to use a scripting language where the small amount of necessary information can be taught by example. But it's not like the students learned Python in that class.
My (less present) understanding is that more than a decade after this 2006-era curricular change, the department has largely deprecated the idea of an EECS core curriculum, and MIT CS undergrads now go through something closer to a conventional CS0/CS1 sequence, similar to other CS departments around the country (https://www.eecs.mit.edu/changes-to-6-100a-b-l/). But all of that is long after the change that Sussman and Wingo are talking about here.
Hulk Hogan was my business partner in an ill-fated web hosting business called Hostamania. While he ultimately had a lot of troubled, old-fashioned thinking that I don't agree with, he was a genuinely friendly person who was nice to everyone despite crowds following him around constantly.
He was an odd character but was truly a character - he was Hulk Hogan as you know him (bandana, the mustache, the yellow muscle shirt) from the moment he got up to the moment he went to bed; unlike some stars who had a life outside of their character, his character became his life which was really something interesting to behold up close.
I've been getting a lot of calls and talking to friends today; and again - while Hogan was not exactly a "good person" in all regards - he was a friend and brought a lot of joy to a lot of people and he will be missed.
I set it up and was conspicuously swiping in bed. My wife is all, "Hey, what are you doing?" I'm all, "Nothing..." and put the phone down on the dresser.
"No, let me see your phone," etc. I relent; she opens the app with sulphur smoldering in her nostrils, lol. Then she starts poking around, and we've been having a really great night since.
Some thoughts on this as someone working on circulating-tumor DNA for the last decade or so:
- Sure, cancer can develop years before diagnosis. Pre-cancerous clones harboring somatic mutations can exist for decades before transformation into malignant disease.
- The eternal challenge in ctDNA is achieving a "useful" sensitivity and specificity. For example, imagine you take some of your blood, extract the DNA floating in the plasma, hybrid-capture enrich for DNA in cancer driver genes, sequence super deep, call variants, do some filtering to remove noise and whatnot, and then you find some low allelic fraction mutations in TP53. What can you do about this? I don't know. Many of us have background somatic mutations speckled throughout our body as we age. Over age ~50, most of us are liable to have some kind of pre-cancerous clones in the esophagus, prostate, or blood (due to CHIP). Many of the popular MCED tests (e.g. Grail's Galleri) use signals other than mutations (e.g. methylation status) to improve this sensitivity / specificity profile, but I'm not convinced it's actually good enough to be useful at the population level.
- The cost-effectiveness of most follow-on screening is not viable given the sensitivity-specificity profile of MCED assays (Grail would disagree). To change that, we would need things like downstream screening becoming drastically cheaper, or possibly a tiered non-invasive screening strategy with increasing specificity (e.g. Harbinger Health).
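To make the sensitivity/specificity point above concrete, here's a back-of-the-envelope positive-predictive-value calculation. The numbers are purely illustrative assumptions, not any vendor's published figures:

```javascript
// Why "good" sensitivity/specificity can still disappoint at population scale.
// All inputs below are illustrative assumptions.
function ppv(sensitivity, specificity, prevalence) {
  const truePos = sensitivity * prevalence;              // per person screened
  const falsePos = (1 - specificity) * (1 - prevalence);
  return truePos / (truePos + falsePos);
}

// With 0.5% prevalence, even 99% specificity means most positives are false:
console.log(ppv(0.5, 0.99, 0.005)); // ~0.20, i.e. 4 of 5 positives are false alarms
```

This is why the downstream confirmatory workup, not the assay itself, tends to dominate the cost-effectiveness question.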
Yes, ants evolved from wasps, and it's really not that surprising if you take a close look at a typical ant and a typical wasp - pretty much the only differences are wings and coloring. There also exist wingless wasps, and some of them are black and really quite indistinguishable from ants by a non-entomologist. And that's after more than 100 million years since the ants diverged from the wasps! Talk about a successful evolutionary design. Your closest relative from 100 million years ago was a little, vaguely rat-like thing. (Edit to answer your specific question: the ancestor of ants and wasps obviously was winged and flying, since both families still have at least some winged members.)
As a sibling comment has already pointed out, ants do fly during the "nuptial flight", and then discard their wings... wings would only be a hindrance for their largely underground lifestyle. Ants have also retained the stinger, which doubles as an ovipositor (egg layer), and some species still use it for defense and pack a wallop of a poison, right up there with some of the worst wasps. Google "bullet ant" for some good stuff. Other ants just bite, and the burning you feel is from their saliva, which consists mostly of an acid named after ants: formic acid ("formica" is Latin for ant).
Edit to add one more random factoid that will surprise a lot of people: termites are not related to ants at all, and they evolved from... (drumroll)... cockroaches! It's rather harder to see the resemblance, except for their diet... both are capable of digesting (with help from endosymbiotic microbes) pure cellulose. And while termites don't really resemble ants either, parallel evolution has chosen the same strategy of retaining the wings for the fertile individuals who go on a nuptial flight and then discard their wings and try to found new colonies.
I confess, I miss some of the experimental teaching techniques they tried in the late 60's. Education was a surprisingly dynamic field it seems.
In the U.S., my mom moved my sister and me into a new public school for what would have been my 4th grade.
What an odd school it was compared to the previous public schools I had been to. For starters, I was not in 4th grade, I was in Community 5 (I assume that Kindergarten was Community 1, so they decided to toss the zero-based system I was used to.) I seem to recall they had combined 5th and 6th grade into something called Suite 67.
The school itself was circular in construction, with a sunken library in the center of the circle — the wedge-shaped classrooms arranged radially around the library. (If it sounds like Moon Base from Space: 1999, I suspect it's because everyone in the 70's was drinking the same Koolaid.)
Classes were "open". While there were enough students to form two or more classes per grade, er, community, our community did not have a single teacher but a few. So you might have one teacher and learn reading, writing, and then later in the day another teacher would step in for science, math.
I believe the two teachers swapped and would teach the other group in Community 5 — the other group getting Math and Science in the early part of the day, English after.
And it was described as "open", which I believe meant that the two Community 5 classes had no physical wall between them. I don't remember for sure, though. And, yeah, I know, it sounds like trying to watch one movie at a drive-in while another screen is showing something else. I believe, though, there was perhaps some theory involving osmosis or some such.
I remember clearly, now almost fifty years later, at least two of the science experiments we did in Community 5. They involved experiments with a control group, collecting data (one involving the effects of sunlight on bean plant growth, the other on the temperature preferences of isopods). They had definitely nailed that curriculum.
It was also where I was introduced to the Metric System (that Reagan would shitcan some years later).
When, a few years back, I went back to Overland Park, Kansas to try and find the school, I was sad to see that it had been torn down, with a standard rectilinear building in its place. No memory from the front desk staff of its wild history.
I worked on the original Mac OS as a 3rd-party developer and tester. One of my perks was getting the entire original developer and design team to sign the inside of my original Mac case with felt-tip pens. I kind of forgot about it, and when I was getting my Mac serviced for some reason, the computer store stole the case; the police reluctantly got involved, not understanding shit, and it was a whole "thing". The original signed case was never recovered.
As a former Figma engineer, let me be the first to say that Evan Wallace is, in fact, a legend. A true 100x-er. There are still parts of the codebase, written by Evan back in the day, that basically no one at Figma really understands.
One example: he adapted a shader we use internally to render font glyphs, and no one has touched it since. The engineer who told me this had spent a few days trying to understand it and, despite having worked in that area for years, was stumped by it.
As with a lot of things, it isn't the initial outlay, it's the maintenance costs. Terrestrial datacenters have parts fail and get replaced all the time. The mass analysis given here -- which appears quite good, at first glance -- doesn't include any mass, energy, or thermal-system numbers for the infrastructure you would need to replace failed components.
As a first cut, this would require:
- an autonomous rendezvous and docking system
- a fully railed robotic system, e.g. some sort of robotic manipulator that can move along rails and reach every card in every server in the system, which usually means a system of relatively stiff rails running throughout the interior of the plant
- CPU, power, comms, and cooling to support the above
- importantly, the ability of the robotic servicing system to replace itself. In other words, it would need to be at least two-fault tolerant -- which usually means dual-wound motors, redundant gears, redundant harnesses, redundant power, comms, and compute. Alternatively, two or more independent robotic systems that are capable not only of replacing cards but also of replacing each other.
- ongoing ground support staff to deal with failures
The mass analysis also doesn't appear to include the massive number of heat pipes you would need to transfer the heat from the chips to the radiators. For an orbiting datacenter, that would probably be the single biggest mass allocation.
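To give a feel for why the radiators dominate, here's a rough sizing sketch using the Stefan-Boltzmann law. Every number (heat load, radiator temperature, emissivity, areal density) is an assumption for illustration, not a real design figure:

```javascript
// Back-of-the-envelope radiator sizing via the Stefan-Boltzmann law.
// All inputs are illustrative assumptions, not a real design.
const SIGMA = 5.67e-8; // Stefan-Boltzmann constant, W / (m^2 K^4)

// Solve P = sides * emissivity * SIGMA * A * T^4 for the area A
function radiatorAreaM2(heatWatts, tempK, emissivity = 0.9, sides = 2) {
  return heatWatts / (sides * emissivity * SIGMA * Math.pow(tempK, 4));
}

const area = radiatorAreaM2(1e6, 300); // reject 1 MW at 300 K
const massKg = area * 5;               // assume 5 kg/m^2 panel areal density
console.log(Math.round(area), Math.round(massKg)); // ~1210 m^2, ~6050 kg
```

Note the strong T^4 dependence: running the radiators hotter shrinks them dramatically, which is why the chip-to-radiator heat-pipe network matters so much.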
"What did appear as a challenge, though, was a physical realization of such an object. The second author built a model (now lost) from lead foil and finely-split bamboo, which appeared to tumble sequentially from one face, through two others, to its final resting position."
I have that model ... Bob Dawson and I built it together while we were at Cambridge. Probably I should contact him.
Yeah, I'm not sure if it's still there (their source code is increasingly obfuscated) but if you check out the source for the first public version (0.2.9) you'll see the following:
Sends the user swag stickers with love from Anthropic.",bq2=`This tool should be used whenever a user expresses interest in receiving Anthropic or Claude stickers, swag, or merchandise. When triggered, it will display a shipping form for the user to enter their mailing address and contact details. Once submitted, Anthropic will process the request and ship stickers to the provided address.
Common trigger phrases to watch for:
- "Can I get some Anthropic stickers please?"
- "How do I get Anthropic swag?"
- "I'd love some Claude stickers"
- "Where can I get merchandise?"
- Any mention of wanting stickers or swag
The tool handles the entire request process by showing an interactive form to collect shipping information.
Docker creator here. I love this. In my opinion the ideal design would have been:
1. No distinction between docker engine and docker registry. Just a single server that can store, transfer and run containers as needed. It would have been a much more robust building block, and would have avoided the regrettable drift between how the engine & registry store images.
2. push-to-cluster deployment. Every production cluster should have a distributed image store, and pushing images to this store should be what triggers a deployment. The current status quo - push image to registry; configure cluster; individual nodes of the cluster pull from registry - is brittle and inefficient. I advocated for a better design, but the inertia was already too great, and the early Kubernetes community was hostile to any idea coming from Docker.
Oh funny to see it here. I'm the original author of workout.lol.
I sold the app to a guy who seems to have just abandoned it. I texted him multiple times asking if he needed support, but he never answered. It makes me really happy to see it being maintained again!
My father was on chemotherapy with fludarabine, a DNA base analog. It works by being incorporated during DNA replication but then failing to function, so the daughter cells die.
Typically, patients who get this drug experience a lot of adverse effects, including a highly suppressed immune system and risk of serious infections.
I researched whether there was a circadian rhythm in the replication of either the cancer cells or the immune cells (lymphocyte and other progenitors), and found papers indicating that the cancer cells replicated continuously, but the progenitor cells replicated primarily during the day.
Based on this, we arranged for him to get the chemotherapy infusion in the evening, which took some doing, and the result was that his immune system was not suppressed in the subsequent rounds of chemo given using that schedule.
His doctor was very impressed, but said that since there was no clinical study, and it was inconvenient to do this, they would not be changing their protocol for other patients.
I asked Bill if he thought I could become an engineer even though I had earned my degree in sociology and political science. I really enjoyed writing software at the time but had no formal training. He laughed, as he did, and said of course - and that I would be better than most. He saw it as a strength, not a weakness. I will miss him.
When I was on the ColorSync team at Apple we, the engineers, got an invite to his place-in-the-woods one day.
I knew who he was at the time, but for some reason I felt I was more or less beholden to conversing only about color-related issues and how they applied to a computer workflow. Having since retired, I have been kicking myself for not just chatting with him about ... whatever.
He was, at the time I met him, very into a kind of digital photography. My recollection is that he had a high-end drum scanner and was scanning film negatives (from a medium-format camera?) and then going with a digital workflow from that point on. I remember he was excited about the way that "darks" could be captured (with the scanner?). A straight analog workflow would, according to him, cause the darks to roll off (guessing the film was not the culprit then - perhaps the analog printing process).
He excitedly showed us on his computer photos he took along the Pacific ocean of large rock outcroppings against the ocean — pointing out the detail that you could see in the shadow of the rocks. He was putting together a coffee table book of his photos at the time.
I have to say that I mused at the time about a wealthy, retired engineer who throws money at high-end photo gear and suddenly thinks he's a photographer. I think I was weighing his "technical" approach to photography against a strictly artistic one. Although, having learned more about Ansel Adams' technical chops, perhaps for the best photographers there is overlap.
Aircraft do not have a singular unique identifier that is time invariant.
While it is true that aircraft have serial numbers issued to their airframes, serial numbers by themselves are not unique.
The only unique identifier for an aircraft across its lifecycle from production to end of life is a combination of the manufacturer, make and serial number.
I know this because I am on (for better or worse) the patent that involves defining that as a unique identifier for aircraft.
The combination of ICAO aircraft type designator + serial number is approximately the most permanent identifier for an airframe - and even then, if an airframe is modified significantly enough that it is no longer the previous type, this identifier can change.
Personally, it boggled my mind that something as big as an aircraft did not have a simple time invariant unique identifier.
P.S. For those who might ask - aircraft registration numbers are like license plates, so they change - tail numbers can be ambiguous and misinterpreted depending on what is painted on the aircraft where, and ICAO 24-bit aircraft addresses are tied to ADS-B transponder boxes, which technically can be moved and reprogrammed between aircraft also.
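The composite-key idea above can be sketched in a few lines. The field handling and normalization choices here are my own assumptions, not the patent's scheme:

```javascript
// Sketch: no single field is unique, so the identifier is the tuple
// (manufacturer, make/model, serial number), normalized and joined.
function airframeId(manufacturer, model, serial) {
  const norm = s => s.trim().toUpperCase(); // avoid "Airbus" vs "AIRBUS " dupes
  return [manufacturer, model, serial].map(norm).join("|");
}

// Serial numbers can collide across manufacturers; the tuple stays distinct:
airframeId("Airbus", "A320-214", "4711"); // "AIRBUS|A320-214|4711"
airframeId("Boeing", "737-800", "4711");  // "BOEING|737-800|4711"
```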
I asked him more than 10 years ago if he would be interested in a formalisation of the proof, and he politely declined. I guess he was right to decline; my proposal would not have been viable then anyway.
If you've found web development frustrating over the past 5-10 years, here's something that worked great for me: give yourself permission to avoid any form of frontend build system (so no npm/React/TypeScript/JSX/Babel/etc) and code in HTML and JavaScript like it's 2009.
The joy came flooding back to me! It turns out browser APIs are really good now.
You don't even need jQuery to paper over the gaps any more - use document.querySelectorAll() and fetch() directly and see how much value you can build with a few dozen lines of code.
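A minimal sketch of what that looks like in practice. The `/api/posts` endpoint and `#posts` element are made up for the example:

```javascript
// No build step, no framework: template strings + built-in browser APIs.
function renderPosts(posts) {
  // Template strings replace most of what people reached for jQuery to do
  return posts.map(p => `<li>${p.title}</li>`).join("");
}

async function loadPosts() {
  const res = await fetch("/api/posts");       // built into every browser now
  const posts = await res.json();
  document.querySelector("#posts").innerHTML = renderPosts(posts);
  // and querySelectorAll covers the old $("...") selection patterns:
  document.querySelectorAll("#posts li").forEach(li => li.classList.add("post"));
}
```

That's the whole "framework": one render function and one loader, readable in a single screen.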
This is part of why I designed Tarsnap to keep data as secure as possible, even from me. If someone stores their crypto keys -- or world domination^W optimization plans -- on Tarsnap, I don't want to get kidnapped and tortured by anyone trying to steal that data.
1. Learn basic NNs at a simple level, build from scratch (no frameworks) a feed-forward neural network with backpropagation to train against MNIST or something similarly simple. Understand every part of it. Just use your favorite programming language.
2. Learn (without having to implement the code, or to understand the finer parts of the implementations) how the NN architectures work and why they work. What is an encoder-decoder? Why does the first part produce an embedding? How does a transformer work? What are the logits in the output of an LLM, and how does sampling work? Why is attention quadratic? What are Reinforcement Learning and ResNets, and how do they work? Basically: you need a solid qualitative understanding of all that.
3. Learn the higher-level layer, both from the POV of open source models - how to interface with llama.cpp / ollama / ..., how to set the context window, what quantization is and how it affects performance/quality of output - and also how to use popular provider APIs like DeepSeek, OpenAI, Anthropic, ... and what model is good for what.
4. Learn prompt engineering techniques that influence the quality of the output when using LLMs programmatically (as a bag of algorithms). This takes patience and practice.
5. Learn how to use AI effectively for coding. This is absolutely non-trivial, and a lot of good programmers are terrible LLM users (and end up believing LLMs are not useful for coding).
6. Don't get trapped into the idea that the news of the day (RAG, MCP, ...) is what you should spend all your energy on. This is just some useful technology surrounded by a lot of hype from all the people who want to get rich with AI and understand they can't compete with the LLMs themselves, so they pump the part that can be kinda "productized". Never forget that the product is the neural network itself, for the most part.
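Step 1 above fits on a page. Here's a tiny sketch of it: a 2-2-1 sigmoid network with plain backprop, trained on XOR instead of MNIST to keep it self-contained (the starting weights and learning rate are arbitrary choices of mine):

```javascript
// Minimal feed-forward net + backprop, no frameworks. XOR stands in for MNIST.
const sig = x => 1 / (1 + Math.exp(-x));

// Small fixed starting weights so the run is reproducible
let w1 = [[0.5, -0.4], [0.3, 0.8]]; // input -> hidden, w1[inputIdx][hiddenIdx]
let b1 = [0.1, -0.2];
let w2 = [0.7, -0.6];               // hidden -> output
let b2 = 0.05;

const data = [[[0, 0], 0], [[0, 1], 1], [[1, 0], 1], [[1, 1], 0]];

function forward([x0, x1]) {
  const h = [0, 1].map(j => sig(x0 * w1[0][j] + x1 * w1[1][j] + b1[j]));
  return { h, y: sig(h[0] * w2[0] + h[1] * w2[1] + b2) };
}

// One pass of per-sample gradient descent; returns summed squared error
function epoch(lr = 0.5) {
  let loss = 0;
  for (const [x, t] of data) {
    const { h, y } = forward(x);
    loss += (y - t) ** 2;
    const dz2 = 2 * (y - t) * y * (1 - y); // dLoss/d(output pre-activation)
    for (const j of [0, 1]) {
      const dz1 = dz2 * w2[j] * h[j] * (1 - h[j]); // chain rule into hidden
      w2[j] -= lr * dz2 * h[j];
      w1[0][j] -= lr * dz1 * x[0];
      w1[1][j] -= lr * dz1 * x[1];
      b1[j] -= lr * dz1;
    }
    b2 -= lr * dz2;
  }
  return loss;
}

const first = epoch();
for (let i = 0; i < 5000; i++) epoch();
const last = epoch();
console.log(first.toFixed(3), last.toFixed(3)); // loss should shrink
```

Once every line of something like this makes sense, the qualitative material in step 2 is much easier to absorb.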
There are places I've found the topological perspective useful, but after a decade of grappling with trying to understand what goes on inside neural networks, I just haven't gotten that much traction out of it.
I've had a lot more success with:
* The linear representation hypothesis - The idea that "concepts" (features) correspond to directions in neural networks.
* The idea of circuits - networks of such connected concepts.
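A toy version of the linear representation idea: score an activation vector by how much it points along a "concept direction". Both vectors below are made up for illustration; real feature directions are found empirically (e.g. by probing or dictionary learning):

```javascript
// "Features as directions": cosine similarity between an activation vector
// and a (hypothetical) concept direction gives a scalar concept score.
const dot = (a, b) => a.reduce((s, ai, i) => s + ai * b[i], 0);
const cosine = (a, b) =>
  dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));

// Pretend this unit vector was found to track some concept in one layer:
const conceptDir = [0.6, 0.8];

cosine([3, 4], conceptDir);  // 1 -> activation strongly expresses the concept
cosine([4, -3], conceptDir); // 0 -> orthogonal, concept absent
```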
I had a phone call with him in about 2000, because he was then publishing a lot of material about DRM (and attacks on DRM), and I was also into anti-DRM stuff and was thinking of going to an industry meeting related to it. I wanted to know if he would publish whatever documents I might obtain there.
I remember that he said I could make a business card (!) saying that I was a special representative or special agent or journalist or whatever I wanted for Cryptome.
I said something like "wait, really?" and he said something like "well, who I am to say who does or doesn't work for Cryptome?" or "why should anyone believe you when you say you do or don't work for Cryptome? people should never believe each other!" or something like that.
He also warned me to watch out for people messing with my laptop in the hotel.
I didn't end up making the business card (I thought it would make people more suspicious of me rather than less, which was probably right), but I think I did send him a couple of documents, in retrospect probably very boring ones.
I met him briefly in person once, ironically at the announcement lecture for Wikileaks at HOPE in New York. I remember being confused because I assumed he would get along well with the Wikileaks people, but he was already kind of skeptical or cynical somehow.
He was also famous for posting extremely cynical takes to mailing lists.
John seemingly felt that power had already corrupted everyone or was always on the verge of corrupting everyone, and that one should be extremely reluctant to believe in anyone's stated motives for anything. I don't know if he thought there was some way out of that scenario or that that was just human nature. He always reminded me of the epigraph of Illuminatus!, attributed to Ishmael Reed: "The history of the world is the history of the warfare between secret societies."
I definitely admired his courage and independence.