I believe this method works well because it turns a long-context problem (hard for LLMs) into a coding and reasoning problem (much better!). You’re leveraging the last 18 months of coding RL by changing your scaffold.
This seems really weird to me. Isn't that just using LLMs in a specific way? Why come up with a new name "RLM" instead of saying "LLM"? Nothing changes about the model.
It's a new architecture for building agents, not a new model. You still have LLMs, but you give them a new agentic loop with a REPL environment where the LLM can try to solve the problem more programmatically.
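A minimal sketch of what that loop might look like, assuming a generic chat-completion API behind the hypothetical `call_llm` stub (the stub here is hard-coded purely for illustration; nothing about the model changes, only the scaffold around it):

```python
# Hedged sketch of the REPL scaffold described above: the model emits small
# Python snippets, we run them against the long context, and feed results
# back until it answers. `call_llm` is a placeholder for any LLM API.

def call_llm(prompt: str) -> str:
    # Placeholder model: asks to count lines once, then finishes.
    if "RESULT" in prompt:
        return "FINAL: done"
    return "RUN: len(context.splitlines())"

def rlm_loop(context: str, question: str, max_steps: int = 5) -> str:
    """Agentic loop: LLM proposes code, scaffold executes it, repeat."""
    transcript = f"Question: {question}\n(Context is in the variable `context`.)"
    for _ in range(max_steps):
        reply = call_llm(transcript)
        if reply.startswith("FINAL:"):
            return reply.removeprefix("FINAL:").strip()
        code = reply.removeprefix("RUN:").strip()
        # eval on model-written code is for illustration only; a real
        # scaffold would sandbox this.
        result = eval(code, {"context": context})
        transcript += f"\nRESULT: {result!r}"
    return "no answer"

print(rlm_loop("a\nb\nc", "How many lines?"))
```

The point is that the context lives in a variable the model queries programmatically, rather than being stuffed into the prompt.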
I didn’t say AI was bad and I acknowledged the benefits of Electron and why it makes sense to choose it.
With 64GB of RAM on my Mac Studio, Claude Desktop is still slow! Good Electron apps exist; it’s just an interesting note given the recent spec-driven development discussion.
1. Batch/Pipeline: Processing a ton of things, with no oversight. Document parsing, content moderation, etc.
2. AI Features: An app calls out to an AI-powered function. Grammarly might send a document off for a summary, a CMS might want to generate tags for a post, etc.
3. Agents: AI manages the control flow.
So much of the discussion online is focused on agents that it skews the macro view, but these patterns are pretty distinct.
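The three patterns above can be sketched roughly like this, with `llm` as a stand-in stub for any model call (the function names and prompts are made up for illustration):

```python
# Stub standing in for any LLM API call.
def llm(prompt: str) -> str:
    return f"<answer to: {prompt[:20]}>"

# 1. Batch/pipeline: map the model over many items, no oversight.
def moderate_all(posts):
    return [llm(f"Is this post acceptable? {p}") for p in posts]

# 2. AI feature: the app owns control flow; the model fills in one step.
def tag_post(body: str) -> str:
    return llm(f"Suggest tags for: {body}")

# 3. Agent: the model's output decides what happens next.
def agent(task: str, max_steps: int = 3) -> str:
    state = task
    for _ in range(max_steps):
        action = llm(f"Next action for: {state}")
        if "done" in action:  # model signals completion
            break
        state = action
    return state
```

The distinction is who holds the control flow: the pipeline (1), the app (2), or the model itself (3).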
There was a good study a few years ago that ran the numbers on this and landed on white paint for residential homes as the best option, for a few reasons, if I remember correctly:
- Installation, maintenance and transmission costs are lower when solar is aggregated on farms
- Solar offsets air conditioning, but that moves the heat outside. White roofs reduce the need for AC, which helps significantly with urban heat scenarios
Ehh, I don’t think folks are claiming to be active duty or former military personnel, which is the bar for stolen valor accusations in my book. I agree with the sentiment but not with the determination of finding fault. Folks hired for a specific role rarely pick their own job titles.
That's what I was alluding to; I don't think it defines AI, do you? These pieces seem like classical ML pieces plus an LLM to me. Is that AI? Like, from a technical standpoint, is it clearly defined?
AI is defined by algorithmic decision making. ML, a subset, is about using pattern matching with statistical uncertainty in that decision making. GenAI uses algorithms from classical ML, including deep learning based on neural networks, to encode and decode input to output, jargonized as a prompt. Whether diffusion or next-token prediction, the patterns are learned during ML training.
AI is not totally encapsulated by ML. For example, reinforcement learning is often considered distinct in some AI ontologies. Decision rules and similar methods from the 1970s and 1980s are also included though they highlight the algorithmic approach versus the ML side.
There are certainly many terms used and misused by current marketing (especially the bitcoin bro grifters who saw AI as an out of a bad set of assets), but there actually is clarity to the terms if one considers their origins.
"AI is not totally encapsulated by ML" that's the part I haven't been able to put my fingers on. I understand that it's not encapsulated, ML is not intelligence, it's gradient descent. So what is in that set AI - {ML}?
Classical ML tasks (e.g. classification, regression), perception (vision, speech) and pattern recognition, generative AI capabilities (text, image, audio generation), knowledge representation and reasoning (symbolic AI, logic), decision-making and planning (including reinforcement learning for sequential decisions), as well as hybrid approaches (e.g. neuro-symbolic methods, fuzzy logic).
The capability areas outside of classical ML have now been overlapped to a degree by GPT architectures and deep learning, but those architectures aren't the whole game.
Yea, I think it's one of those things that I won't understand from the outside looking in. I'm in semiconductor software, so I do a lot of classical numerical methods, graph theory, and ML research, like converting math-heavy, obscure ML algorithms from academia for our ML teams. I don't think I'll get the technical side of what is now called AI without OJT in it.