This is a good insight, though I wouldn't limit it to software engineering.
I've discussed with my therapist many times that my biggest mental health challenges are from the exact same personality traits that bring me my greatest joy and value. Every maladaptive trait has its adaptive aspects and vice versa. If I were to try to eliminate those maladaptive aspects, I'd probably lose much of the adaptive side as well.
There is a real zen to being able to note that the things which cause the most anguish also cause the most joy and accepting both sides of that coin at the same time.
It tickles me that this quote came from a YA novel of all places, but in The Perks of Being a Wallflower, Chbosky writes "We accept the love we think we deserve".
If that isn't one of the deepest aphorisms on psychology out there, I don't know what is.
1. Maximizing their own profit.
Studios don't want to license content to Netflix now that they are direct competitors, so Netflix has fewer and fewer movies and shows that they didn't produce themselves. And they want to spend as little as possible on producing their own content.
That way they make as much profit from the subscriptions as they can.
2. Reducing the value of competitors.
They are competing for user time. They want you to spend as many minutes as possible on Netflix because any minute not spent there is a minute you might be spending on Hulu or Apple TV. At the end of the month, when you decide that you can't afford that many streaming services and cut one, you'll pick based on which one you use the most. They don't want that to be the other guy.
Using a hierarchy to show template errors is brilliant and I'm sort of surprised compilers haven't always done that.
I was investigating C++-style templates for a hobby language of mine, and SFINAE is an important property for making them work in realistic codebases, but it leads to exactly this problem. When a compile error occurs, there isn't a single cause, or even a linear chain of causes, but a potentially arbitrarily large tree of them.
For example, the compiler sees a call to foo() and finds a template foo(). It tries to instantiate that, but the body of foo() calls bar(). It tries to resolve that and finds a template bar(), which it tries to instantiate, and so on.
The compiler is basically searching the entire tree of possible instantiations/overloads and backtracking when it hits dead ends.
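To make it concrete, here's a minimal sketch (all names made up) of how a single call fans out into a tree of candidates, each of which can fail for its own reasons:

    #include <vector>

    // Two SFINAE'd candidates for bar(): each drops out of the overload
    // set when its return-type expression fails to substitute for T.
    template <typename T>
    auto bar(T t) -> decltype(t.size()) { return t.size(); }  // needs .size()

    template <typename T>
    auto bar(T t) -> decltype(t + 1) { return t + 1; }        // needs +

    // Instantiating foo<T> forces overload resolution for bar(t), which
    // in turn attempts to instantiate every candidate above.
    template <typename T>
    auto foo(T t) { return bar(t); }

    int main() {
        foo(std::vector<int>{1, 2, 3});  // first bar survives, second fails
        foo(42);                         // second bar survives, first fails
        // foo(nullptr);  // every branch fails, and a good error report
        //                // has to explain why each candidate was rejected
    }

A hierarchical error display maps naturally onto exactly that search tree.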
Yes, it's definitely nice to be able to typecheck generic code before instantiation. But supporting that ends up adding a lot of complexity to the type system.
C++-style templates are sort of like "compile-time dynamic types" where the type system is much simpler because you can just write templates that try to do stuff and if the instantiation works, it works.
C++ templates are more powerful than generics in most other languages, while not having to deal with covariance/contravariance, bounded quantification, F-bounded quantification, traits, and all sorts of other complex machinery that Java, C#, etc. have.
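As a tiny illustration of that duck-typing flavor, here's a minimal sketch (hypothetical max_of): the template declares no bounds on T at all, and the requirement only surfaces if an instantiation violates it.

    #include <string>

    // No trait, no interface, no bound: T just has to support `<`,
    // and that is only checked when the template is instantiated.
    template <typename T>
    T max_of(T a, T b) { return b < a ? a : b; }

    struct Point { int x, y; };  // no operator<

    int main() {
        max_of(1, 2);                                // fine: int has <
        max_of(std::string("a"), std::string("b"));  // fine: string has <
        // max_of(Point{1, 2}, Point{3, 4});  // fails only here; in Java or
        //                                    // C# you'd have needed a bound
        //                                    // like T extends Comparable<T>
    }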
I still generally prefer languages that do the type-checking before instantiation, but I think C++ picks a really interesting point in the design space.
Please no. Templates being lazy is so much better than Rust's eager trait evaluation, which causes incredible amounts of pain beyond a certain complexity threshold.
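To sketch what "lazy" means mechanically (a hypothetical serialize example, not a claim about where the pain threshold sits): the C++ definition below type-checks with its requirement left implicit, while Rust rejects the equivalent generic fn until the bound is declared and propagated through every generic caller.

    #include <string>

    // The requirement "T has a to_json()" exists, but only implicitly:
    // this definition compiles even if no suitable T exists anywhere.
    template <typename T>
    std::string serialize(T value) {
        return value.to_json();
    }

    // Rust's eager equivalent won't compile until you write
    // fn serialize<T: ToJson>(value: T) -> String, and every generic
    // caller up the stack has to carry (or imply) that same bound.

    int main() {
        // serialize(42);  // the error appears here, at the instantiation,
        //                 // not at the definition above
    }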
How so? I'd really like to see an example. If you can't explain what requirements your function has before calling it then how do you know it even does what you expect?
Trump thinks of himself as a powerful, insightful deal-maker. He pathologically needs to feel like he is doing something and his comfort zone for what kinds of things to do is "stuff that feels like business deals".
He knows very little about government and policy, but tariffs and dollars and percentages feel familiar to him. He's got a lever, and he can't resist yanking on it to show that he's in charge.
Whether yanking on the lever actually benefits the country or him is entirely beside the point. He's a narcissist, and narcissists can't let go.
This is, I think, one of the deepest arts in user experience design. It comes up in games, but I find myself applying it all the time when designing a programming language too.
A human sitting at a machine and pushing buttons is expressing some sort of intention. When the machine can do exactly what they requested, easy, do that.
But what happens if their intentions don't seem to make sense given the current state of the system and/or their previously indicated intentions?
In the context of Tetris, it's the player trying to rotate a piece after moving it too close to the wall to do that. In a programming language, it might be that they've declared a function taking a string and then written a call that passes it a number.
Sometimes, you can make a pretty confident guess as to what they are trying to do and have the system either do that instead or at least use that assumption to explain why the system can't do what they requested.
But deciding when a guess is a safe bet for the system to make and when it will be wrong often enough to confuse users is really hard, especially as the complexity of the system and the diversity of the userbase scales up.
Some species have properties that seem way beyond what would be necessary for evolutionary success. For example, an inland taipan has enough venom in a single bite to kill 100 people. There is no imaginable situation where a taipan is going to need to bite an entire village and take them all out. Why on Earth did it evolve such insanely strong venom?
When you dig into it, these outliers are often the result of an evolutionary arms race [1]. In the case of the inland taipan, they are often prey for mulga snakes and perenties, which have evolved immunity to their venom. So you've got a feedback loop where the taipan keeps evolving stronger venom to fight back against predators, who keep evolving stronger immunity to that venom. Run that loop for a million years and you get a snake that can kill a busload of people.
Human intelligence is another such outlier. I know it's popular to talk about how animal intelligence is underestimated, but even so, human intelligence is just astronomically greater than that of any other species. Sure, a squirrel can find a bunch of nuts it buried. Humans have built machines and landed them on other planets. Our intelligence is orders of magnitude beyond anything else out there.
My pet hypothesis for years has been that this must be the result of an evolutionary arms race within the species. Humans are a profoundly social species. We are mostly too fragile to survive in the wild on our own. The functioning survival unit of humans is the hunter-gatherer group. We are sort of like eusocial animals such as ants.
But unlike ants who can mostly rely on simple chemical signals to tell which other ants are part of their anthill, we have to rely on social cues. There is a very strong incentive to be able to deceive humans in other groups and infiltrate or sabotage their group. If you're smart enough to sneak in, you can steal a lot or do a lot of damage. Likewise, there's an equally strong incentive to be able to suss those bad actors out to prevent them from doing that. The smarter you are, and the better you're able to remember people and describe them to others, the harder it is to get taken advantage of.
Turn that evolutionary crank a few hundred thousand years, and you get a species so smart that the only other animals that can possibly hope to compete with them in terms of intelligence are other Homo sapiens.
If we weren't so deeply social, I don't think we'd be so smart. We have these huge brains in order to navigate the fantastically complex social world which we have in turn created by having these huge brains.
I suspect we are not orders of magnitude more intelligent than some of our cetacean friends in the deep sea.
At the end of the day, all the achievements you've listed are the result not only of massive intelligence but also of opposable thumbs (for tool use and writing) and fire (an easy early source of energy to kick off runaway growth).
If humans were just as intelligent but never had these, we'd never reach the moon or build machines.
I'd argue that it's not intelligence alone that lets us land a machine on the moon — it's our cooperation, written language, learned culture, communication. An individual person cannot land a machine on the moon on their own. It is not intelligence that produces that, but intelligence contained and continually improved upon within a superorganism of social culture.
Yes, but it requires intelligence for cooperation to be evolutionarily viable. Dumb but cooperative animals are too easy for a freeloader to take advantage of.
And it obviously requires massive intelligence for written language and high fidelity communication.
I think that there are two sides to intelligence: intelligence as an inborn ability to become intelligent, and intelligence as the result of the process of becoming intelligent. People are more intelligent than animals in the first sense, AND they developed a process for training that "raw" inborn intelligence into an adult mind.
> My pet hypothesis for years is that this must be the result of an evolutionary arms race within the species.
Frans de Waal studied primates and defended just this hypothesis. He wrote a bunch of books, so you may be interested in reading at least some of them.
BTW, my pet hypothesis is that the social roots of our intelligence are the reason why people are mostly hopelessly dumb. They kinda go on intuition, and it works in normal social situations, but when it comes to politics or science the intuition fails them. One needs to train their mind to stick to logic and rationality, without taking the sides that seem most beneficial to them. But training is hard, and training can easily be overcome by intuition, which is the natural way for the brain to work. You need to keep your brain working against its nature to not lose your intelligence in situations which are not hierarchy games.
In Japan, men commit suicide at roughly twice the rate of women. The age group with the highest rate of suicide was 50-59. I can't find good data, but loneliness and not feeling valued by a community is very likely a significant contributor here.
Women are important but if the problem you're trying to solve is deaths of despair, then focusing on men makes sense.
Suicide among men is ~4x higher in Canada and the USA in some demographics, too. In some cases, such as men 80+, the rate is 6.5x higher in Canada. This is crazy sad. It doesn't mean other things affecting women aren't sad. It just means this is sad.
"We wanted to strengthen the connection between the children and the older generations in the community. There are so many amazing people here. I thought it was such a shame that no one knew about them."
Older people being forgotten and unknown seems relevant to me.
Honestly, seriously. Imagine some weird Thanos showed up, snapped his fingers and every single bit of generative AI software/models/papers/etc. were wiped from the Earth forever.
Would that world be measurably worse in any way in terms of meaningful satisfying lives for people? Yes, you might have to hand draw (poorly) your D&D character.
But if you wanted to read a story, or look at an image, you'd have to actually connect with a human who made that thing. That human would in turn have an audience to experience the thing they made.
Imagine a world where Thanos snapped his fingers and Photoshop (along with every digital application like it) was wiped from Earth forever. The world would keep on turning and artists would keep on creating, but creating art would be more difficult and fewer people would be able to do it (or even touch up their own photos).
Would that world be so bad? Was the world really so horrible before Photoshop existed?
What if we lost YouTube? What if we lost MP3s?
We could lose a lot of things we didn't always have and we'd still survive, but that doesn't mean that those things aren't worth having or that we shouldn't want them.
Obviously the former status quo wasn’t that bad. But the opposite is also true: AI democratizes access to pop culture. So now when I connect with a human it’s not to share memes, it’s higher order. IOW, we can spend more time playing D&D because we don't have to draw our characters.
It's a rhetorical example. Suppose you need to create an avatar of your character. Why does it follow that it's not beneficial to have an AI help generate the avatar?
You're responding to the specific example, not the general argument. Unless your counter is that whatever humanity is doing that AI is helping is probably stupid and shouldn't be done anyway.
> Unless your counter is that whatever humanity is doing that AI is helping is probably stupid and shouldn't be done anyway.
No, my counter is that whatever generative AI is doing is worth doing by humans but not worth doing by machines.
As the joke comic says: We thought technology was going to automate running errands so that we had time to make art, but instead it automates making art while we all have to be gig workers running errands.
> No, my counter is that whatever generative AI is doing is worth doing by humans but not worth doing by machines.
There is no basis to this claim. Why is something worth doing by a biological machine but not a silicon one? People cling too tightly to biological exceptionalism, not understanding that biology arose due to certain processes in this world and universe, and that somewhere else we might have been silicon beings all along. That is to say, people have huge amounts of cognitive dissonance about the fact that they are simply machines of a biological variety.
> As the joke comic says: We thought technology was going to automate running errands so that we had time to make art, but instead it automates making art while we all have to be gig workers running errands.
Hardware is harder than software. Soon gig workers will be automated by AI too. I have heard this refrain a thousand times, but it never ceases to make me think that it belongs to a specific time and place: the early 21st century. In the 22nd century, given such progress, we might talk of these discussions the same way we now talk of the ones artists and weavers were having in the early 20th.
Same in mine. But mine isn’t predicated on the irreplaceability of human labor to derive value from human life. If we automated literally everything and we could all just live off UBI and drink wine and look at the stars all day, humans would still be intrinsically valuable. Or: the ability of a machine to generate art good enough to serve as a fun D&D avatar does not devalue a human doing the same.

You may be attaching a… market value… to humans by proxy of their capital output. Very capitalist of you. If you look at things that way, the value of a human life has been trending toward zero and will continue to.

So I prefer not to hold a belief system that only values humans by the value of their labor. Therefore I am not bothered when we invent a new tool that might compete with human labor.
You are interpreting "labor" in a purely economic sense, but that's a choice of framing.
What I'm getting at is that our actions are rewarding and meaningful when we put effort into them and they provide value (in the general, not economic sense) to others.
If I spend a day drawing you a picture, you get a warm fuzzy feeling because of how much I must care to sacrifice one of my finite days on Earth to make a thing just for you. If I spend ten seconds writing a prompt and an AI spits out an objectively prettier picture, it's still less meaningful and less valuable in every sense that matters. I gave up nothing to produce it and you gained little by having it.
> Therefore I am not bothered when we invent a new tool that might compete with human labor.
This is likely a luxury you have by being economically stable enough to not have to worry about how you're going to put food in your stomach today. While it's fun to imagine idyllic post-capitalist societies, artists today need to be able to afford shelter and healthcare. Generative AI will destroy their livelihood.
That may be a sacrifice you are willing to make (since it likely isn't coming to take your job), but I care too much about other people to be delighted by that.
> No one needs an avatar. You can draw a stick figure or take a selfie or whatever. This is all so silly and trivial.
This does not answer the question; it sidesteps the requirement. I can either have no avatar, or I can make one at a marginal cost of zero via an AI. This suggests that the true complaint people have against AI is an economic one, not a technological one. In which case, fix the economics, and support open source AI so that everyone may have such tech, not just big tech.
OK? What does that have to do with pop culture IP rights?
If you're building an LLM for management or technical consulting then the valuable content is locked up behind corporate firewalls anyway so you're going to have to pay to use it. In that field most of what you could find with a web crawler or in digital books is already outdated and effectively worthless.