I somewhat agree - nonetheless, a FastAPI + Alembic + SQLAlchemy alternative in R would make it possible to use R as a general-purpose language
data layer > business logic layer > presentation layer
I believe the presentation/analytics layer has become malleable, and possibly parts of the business logic layer - you still need a higher % of trustworthiness than LLMs can provide for parts of the business and data layers.
> you still need a higher % of trustworthiness than LLMs can provide for parts of the business and data layers
For many domain-heavy systems, it's not even about trustworthiness; just getting the business logic right requires a lot of work and many iterations with in-house domain experts and clients. There's no way LLMs can do that.
This is the current sentiment. But it is short sighted.
The best recommendation is to _know_ the fundamentals of house prices - to know when buying is cheap and when it is expensive.
E.g. in relative terms: at a price-to-rent ratio of 30, renting is cheaper than buying - in such an environment, just rent. If the P/R ratio falls to 15-20, then buy.
Housing can also be unaffordable in absolute terms, such as wanting to live in downtown San Francisco. In that case, people should strongly consider whether they want to pay a premium for that location.
We don't have to go back further than 2013 to find a time when it made sense to buy over renting - and that will return at some point.
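The ratio rule above is simple arithmetic; here is a minimal sketch, with hypothetical prices and the thresholds taken from this comment (names and numbers are illustrative, not a standard formula):

```python
def price_to_rent(price, monthly_rent):
    """Price-to-rent ratio: purchase price divided by one year of rent."""
    return price / (12 * monthly_rent)

# Hypothetical example: a $600k house that rents for $2,000/month.
ratio = price_to_rent(600_000, 2_000)
print(ratio)  # 25.0 - between the "just rent" (~30) and "buy" (15-20) bands
```

At 25 the ratio sits between the two thresholds mentioned, so neither rule fires cleanly - the point of tracking the number is noticing when it drifts into one band or the other.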
It's your choice to dwell on such things, and therefore your choice to be unhappy about them. You can't change the past, so you can either be unhappy about it or not. It's your choice.
Any reason to upgrade an M2 16GB MacBook to an M4 ..GB (or 2026 M5) for local LLMs? I'm due an upgrade soon, and perhaps it would be educational to run these things more easily locally?
For LLMs, VRAM is the number one requirement. Since MacBooks have unified RAM, you can use up to ~75% of it for the LLM, so a higher-RAM model would open up more possibilities - but those are much more expensive, of course.
As an alternative you might consider a Ryzen AI Max+ 395, like in the Framework Desktop or HP ZBook G1a, but the 128GB versions are still extremely expensive. The Asus Flow Z13 is a tablet with the same chip, but it's hardly available with 128GB.
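The "up to ~75% of unified RAM" figure above lends itself to a rough back-of-the-envelope check. This is a simplified sketch - it counts only quantized weights and ignores KV cache and activation overhead, so real usable model sizes are somewhat smaller:

```python
def model_fits(params_billions, bits_per_weight, ram_gb, usable_fraction=0.75):
    """Rough check: do the quantized weights fit in the RAM an LLM can claim?

    usable_fraction=0.75 reflects the ~75%-of-unified-memory figure from the
    comment above; KV cache and activation memory are deliberately ignored.
    """
    weights_gb = params_billions * bits_per_weight / 8
    return weights_gb <= ram_gb * usable_fraction

# A 20B-parameter model at 4-bit quantization needs ~10 GB of weights:
print(model_fits(20, 4, 16))  # True  - fits in the ~12 GB usable on 16 GB
print(model_fits(20, 8, 16))  # False - 20 GB of 8-bit weights does not
```

This also shows why the step from 16GB to a higher-RAM configuration matters more than the chip generation for local LLM use.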
I did just that - got the 32GB RAM one so I could run Qwen.
Might still be early days. I'm trying to use the model to sort my local notes, but I don't know, man - it seems only a little faster, yet still unusable, and I downloaded the lighter Qwen model as recommended.
Again, it's early days and maybe I'm being an idiot - I did manage to get it to parse one note, after about 15 minutes though.
gpt-oss-20b eats too much RAM to use for anything other than an overnight task - maybe 3 tok/s.
I've been playing around with the 8B versions of Qwen and DeepSeek. Seems usable so far. YMMV - I'm just messing around in chat at the moment and haven't really had it do any tasks for me.
How close are we to building our own digital Commander Data with the current iteration of voice-based agents? "In progress" according to the website - mind-blowing that this is the current reality.
Cool website. Would love to subscribe to status changes via email.
Extremely far. Not very far from the ship computer, though. It doesn't seem sentient, but it has a voice interface to all operational data/stats, plus a snapshot of all known knowledge at the time of departure from the spaceport.
When dealing with LLMs and prompt engineering to get CAI to do what I needed it to do, I am reminded of the scene from TNG where Geordi continuously rephrases and readjusts his requests to the computer in the holodeck (especially around the 3:30 timestamp):
I had not considered the ship computer; that indeed feels closer to current progress. Having said that, I started rewatching TNG recently, and it is quite fascinating to compare Data to the current voice models.
It's hinted that Data has a voice model as a subsystem. :) He allegedly can't use contractions, IIRC. (Then of course he does eventually, because what actor could keep track of that, but anyhow.)