If you Google "modern C++" you will mostly find C++11-era tutorials and posts, and less and less content on the latest standards, where some things considered "modern" a few years ago are already deprecated or no longer considered best practice.
I'd check this[0] excellent mega-rant about C++ and take the best parts to create a truly "contemporary C++" cheatsheet.
Thank you for sharing this excellent resource! You make a great point—searching for "modern C++" often surfaces C++11-era content, while newer standards have already deprecated some of those "modern" practices. That's exactly why I created this project: to continuously update and document contemporary best practices as the language evolves. I'll definitely check out the linked rant for ideas to incorporate. Thanks for the suggestion!
Ha, almost every "New Business Idea & Product Name" is heavily focused on AI. It tracks. Every time I present a problem to an LLM, the solution is the same: "use AI". Can't blame them though.
"But as Deepak Chopra taught us, quantum physics means anything can happen at any time for no reason. Also, eat plenty of oatmeal, and animals never had a war! Who's the real animals?" -Professor Hubert Farnsworth
>Alternativa Estudiantil
Alternativa Estudiantil is a Spanish patriotic student movement founded in September 2023 to counter perceived left-wing and woke dominance in universities, positioning itself as a conservative alternative emphasizing national identity and meritocracy.[1]
>I sure hope that more people start wanting to bet on brighter tomorrows and actually put some skin in the game.
I'm optimistic about the future, but I can't see how that's related to having children. In any case, having children is putting someone else's skin in the game, not mine.
>That letter was sent by Opus itself on its own account. The creators of Agent Village are just letting a bunch of the LLMs do what they want, really (notionally with a goal in mind, in this case "random acts of kindness");
What a moronic waste of resources. Random act of kindness? How low is the bar that you consider a random email an act of kindness? Stupid shit. They could at least instruct the agents to work on a useful task like those parroted by Altman et al., e.g. finding a cure for cancer, solving poverty, solving fusion.
Also, LLMs don't and can't "want" anything. They also don't "know" anything, so they can't understand what "kindness" is.
Why do people still think software has any agency at all?
Plants don't "want" or "think" or "feel" but we still use those words to describe the very real motivations that drive the plant's behavior and growth.
Criticizing anthropomorphic language is lazy, unconsidered, and juvenile. You can't string together a legitimate complaint so you're just picking at the top level 'easy' feature to sound important and informed.
Everybody knows LLMs are not alive and don't think, feel, or want. You have not made a grand discovery that recontextualizes all of human experience. You're pointing at a conversation everyone else has had a million times and feeling important about it.
We use this kind of language as a shorthand because talking about inherent motivations and activation parameters is incredibly clunky and obnoxious in everyday conversation.
The question isn't why people think software has agency (they don't) but why you think everyone else is so much dumber than you that they believe software is actually alive. You should reflect on that question.
> Because LLMs now not only help me program, I’m starting to rethink my relationship to those machines. I increasingly find it harder not to create parasocial bonds with some of the tools I use. [...] I have tried to train myself for two years, to think of these models as mere token tumblers, but that reductive view does not work for me any longer.
> Criticizing anthropomorphic language is lazy, unconsidered, and juvenile.
To the contrary, it's one of the most important criticisms against AI (and its masters). The same criticism applies to a broader set of topics, too, of course; for example, evolution.
What you are missing is that the human experience is determined by meaning. Anthropomorphic language about, and by, AI, attacks the core belief that human language use is attached to meaning, one way or another.
> Everybody knows LLMs are not alive and don't think, feel, want.
What you are missing is that this stuff works way more deeply than "knowing". Have you heard of body language, meta-language? When you open ChatGPT, the fine print at the bottom says, "AI chatbot", but the large print at the top says, "How can I help?", "Where should we begin?", "What’s on your mind today?"
Can't you see what a fucking LIE this is?
> We use this kind of language as a shorthand because talking about inherent motivations and activation parameters is incredibly clunky
Not at all. What you call "clunky" in fact exposes crucially important details; details that make the whole difference between a human, and a machine that talks like a human.
People who use that kind of language are either sloppy, or genuinely dishonest, or underestimate the intellect of their audience.
> The question isn't why people think software has agency (they don't) but why you think everyone else is so much dumber than you that they believe software is actually alive.
Because people have committed suicide due to being enabled and encouraged by software talking like a sympathetic human?
Because people in our direct circles show unmistakable signs that they believe -- don't "think", but believe -- that AI is alive? "I've asked ChatGPT recently what the meaning of marriage is." Actual sentence I've heard.
Because the motherfuckers behind public AI interfaces fine-tune them to be as human-like, as rewarding, as dopamine-inducing, as addictive, as possible?
> Because the motherfuckers behind public AI interfaces fine-tune them to be as human-like, as rewarding, as dopamine-inducing, as addictive, as possible?
And to think they don't even have ad-driven business models yet.