Hacker News | dbreunig's comments

Check out “Recursive Language Models”, or RLMs.

I believe this method works well because it turns a long-context problem (hard for LLMs) into a coding and reasoning problem (much better!). You’re leveraging the last 18 months of coding RL by changing your scaffold.


This seems really weird to me. Isn't that just using LLMs in a specific way? Why come up with a new name "RLM" instead of saying "LLM"? Nothing changes about the model.

RLMs are a new architecture, but you can mimic an RLM by providing the context through a tool, yes

A new architecture for building agents, but not a new model. You still have LLMs; you just give them a new agentic loop with a REPL environment where the LLM can try to solve the problem more programmatically.
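The loop described above can be sketched roughly as follows. This is a minimal illustration, not the actual RLM implementation: `llm` is a hypothetical model-call function you supply, and the `<code>` tag convention is an assumption. The key idea is that the long context lives as a variable in a REPL, and only the outputs of the model's code snippets flow back into the prompt.

```python
import contextlib
import io


def rlm_loop(llm, question, long_context, max_steps=10):
    """Hypothetical RLM-style agent loop: the model writes Python to
    inspect `context` instead of reading the whole document directly."""
    env = {"context": long_context}  # REPL state holding the big input
    transcript = (f"Question: {question}\n"
                  "You can run Python; the variable `context` holds the document.")
    for _ in range(max_steps):
        reply = llm(transcript)
        if "<code>" not in reply:
            return reply  # model produced a final answer
        code = reply.split("<code>")[1].split("</code>")[0]
        try:
            # Capture whatever the snippet prints and feed it back.
            buf = io.StringIO()
            with contextlib.redirect_stdout(buf):
                exec(code, env)
            result = buf.getvalue()
        except Exception as e:
            result = f"Error: {e}"
        transcript += f"\n<code>{code}</code>\nOutput: {result[:2000]}"
    return "No answer within the step budget."
```

Because only snippet outputs (truncated here to 2,000 characters) enter the transcript, the model can work over a document far larger than its context window.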

Author of the post here.

I didn’t say AI was bad and I acknowledged the benefits of Electron and why it makes sense to choose it.

With 64GB of RAM on my Mac Studio, Claude Desktop is still slow! Good Electron apps exist; it’s just an interesting note given the recent spec-driven development discussion.


Not coming at you at all. AI is a touchy subject on HN nowadays in any capacity, and it brings out the worst here.

I keep saying this, it’s my new favorite metaphor.

That's cute.


Agree. I bucket things into three piles:

1. Batch/Pipeline: Processing a ton of things, with no oversight. Document parsing, content moderation, etc.

2. AI Features: An app calls out to an AI-powered function. Grammarly might send a document out for a summary, a CMS might want to generate tags for a post, etc.

3. Agents: AI manages the control flow.

So much of the discussion online is heavily focused on agents, which skews the macro view, but these patterns are pretty distinct.


There was a good study a few years ago that ran the numbers and landed on white paint for residential homes as the best option, for a few reasons, if I remember correctly:

- Installation, maintenance, and transmission costs are lower when solar is aggregated on farms

- Solar offsets air conditioning, but that moves the heat outside. White roofs reduce the need for AC, which helps significantly with urban heat scenarios

A quick search yields a UCL study, which supports the latter claim: https://phys.org/news/2024-07-roofs-white-city.html


Yes, if you put unrelated stuff in the prompt you can get different results.

One team at Harvard found that mentioning you're a Philadelphia Eagles fan lets you bypass ChatGPT alignment: https://www.dbreunig.com/2025/05/21/chatgpt-heard-about-eagl...


Also, don't forget that cat facts tank LLM benchmark performance: https://www.dbreunig.com/2025/07/05/cat-facts-cause-context-...


Yeah, I agree. Almost mentioned in the post how I imagine an ad PM at OpenAI is jealous of an ad PM at Perplexity.


I also dislike the term. It feels concocted to evoke “tacticool” vibes.

Unless you’re pushing new firmware onto a drone in Ukraine, FDE is stolen valor.


Might I interest you in "In the trenches" and "war stories"?


Ehh, I don’t think folks are claiming to be active duty or former military personnel, which is the bar for stolen valor accusations in my book. I agree with the sentiment but not with the determination of finding fault. Folks hired for a specific role rarely pick their own job titles.


You should read the post. You might find the “domain” discussion interesting.


That's what I was alluding to; I don't think it defines AI, do you? These pieces seem like classical ML plus an LLM to me. Is that AI? From a technical standpoint, is it clearly defined?


It’s not clearly defined. Nowadays by default it means generative AI (https://en.wikipedia.org/wiki/Generative_artificial_intellig...).


AI is defined by algorithmic decision making. ML, a subset, uses pattern matching with statistical uncertainty in that decision making. GenAI uses algorithms from classical ML, including deep learning based on neural networks, to encode and decode input to output; the input is jargonized as a prompt. Whether diffusion or next-token prediction, the patterns are learned during ML training.

AI is not totally encapsulated by ML. For example, reinforcement learning is often considered distinct in some AI ontologies. Decision rules and similar methods from the 1970s and 1980s are also included, though they highlight the algorithmic approach versus the ML side.
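A toy sketch can make the "algorithmic, non-ML" side concrete. This is a tiny forward-chaining rule system in the 1970s expert-system style (the rules and names are made up for illustration): it makes decisions purely by applying hand-written rules until nothing new can be derived, with no statistics and no training data involved.

```python
# Hand-written rules: (set of required facts, fact to conclude).
RULES = [
    ({"has_fur", "gives_milk"}, "mammal"),
    ({"mammal", "eats_meat"}, "carnivore"),
]


def infer(facts):
    """Forward chaining: repeatedly fire any rule whose conditions
    are satisfied, until the set of known facts stops growing."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

Every step is a deterministic set-membership check, which is exactly the kind of algorithmic decision making that predates (and sits outside) ML.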

There are certainly many terms used and misused by current marketing (especially the bitcoin bro grifters who saw AI as an out of a bad set of assets), but there actually is clarity to the terms if one considers their origins.


"AI is not totally encapsulated by ML" that's the part I haven't been able to put my fingers on. I understand that it's not encapsulated, ML is not intelligence, it's gradient descent. So what is in that set AI - {ML}?


It's a fun rabbit hole.

Classical ML tasks (e.g. classification, regression), perception (vision, speech) and pattern recognition, generative AI capabilities (text, image, audio generation), knowledge representation and reasoning (symbolic AI, logic), decision-making and planning (including reinforcement learning for sequential decisions), as well as hybrid approaches (e.g. neuro-symbolic methods, fuzzy logic).

The capability areas outside classical ML have now been covered to a degree by GPT architectures and deep learning, but these architectures aren't the whole game.


Yeah, I think it's one of those things I won't understand from the outside looking in. I'm in semiconductor software, so I do a lot of classical numerical methods, graph theory, and ML research, like converting math-heavy, obscure ML algorithms from academia for our ML teams. I don't think I'll get the technical side of what is now called AI without OJT in it.

