Hacker News | lifecodes's comments

MANAGED AGENTS sounds like progress, but also like we’re standardizing around the current limitations instead of solving them.

I guess it would be much cheaper to attach an API version of everything we've developed so far than to teach these AIs to control things in the real world the way humans do.

I mean, if you look at the cost of training, then building more APIs for everything we already have makes sense to me.

What do you think?


I think the big thing I was trying to highlight in this article was the fact that not much effort has been put into spatial and image awareness. In my limited experiments, when I manually asked the models to take an image and highlight things (like "circle all elbows"), they did a great job... but if you ask the model where an elbow is in the image (in pixels), it does a poor job.

Or maybe put another way, going from `image->model->tool` seems to be an area for improvement.
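To make the `image->model->tool` gap concrete, here's a hypothetical sliver of the glue code such a pipeline would need, assuming the model can at least emit normalized box coordinates (which, per the point above, is exactly the step models currently do poorly). The function name and coordinate convention are my own illustration, not from any real API.

```python
def norm_box_to_pixels(box, width, height):
    """Convert a model-emitted normalized (x0, y0, x1, y1) box into
    integer pixel coordinates a downstream tool can act on."""
    x0, y0, x1, y1 = box
    return (round(x0 * width), round(y0 * height),
            round(x1 * width), round(y1 * height))

# e.g. a hypothetical "elbow" detection on a 640x480 frame:
print(norm_box_to_pixels((0.25, 0.5, 0.5, 0.75), 640, 480))
# -> (160, 240, 320, 360)
```

The conversion itself is trivial; the hard part is getting the model to produce those normalized coordinates accurately in the first place.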


I guess we are reaching the point where "10T parameters" sounds more like a marketing number than a meaningful metric.

Between MoE, aggressive quantization, and synthetic data pipelines, it's getting harder to tell whether bigger models are actually better, or just more expensive to train.

It would be more interesting to see capability per dollar or per watt, not parameter count.
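A minimal sketch of what such a metric would look like, using entirely made-up scores, costs, and power draws (purely illustrative, not real figures for any model):

```python
# Hypothetical benchmark scores, training costs, and inference power
# draws -- made-up numbers, only to illustrate the metric itself.
models = {
    "A (1T params)": {"score": 88.0, "cost_usd": 100e6, "power_w": 700},
    "B (70B params)": {"score": 82.0, "cost_usd": 5e6, "power_w": 350},
}

for name, m in models.items():
    per_dollar = m["score"] / m["cost_usd"]   # benchmark points per training dollar
    per_watt = m["score"] / m["power_w"]      # benchmark points per inference watt
    print(f"{name}: {per_dollar:.2e} score/$, {per_watt:.3f} score/W")
```

Under these toy numbers the smaller model wins decisively on score-per-dollar, which is exactly the kind of comparison a raw parameter count hides.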


If this holds, does it unlock 100B+ models running locally in ~tens of GB RAM? Or does accuracy collapse before that point?
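Back-of-the-envelope arithmetic for that question (my own calculation, not from the article): weight memory is roughly parameters × bits per parameter / 8, ignoring KV cache and activation overhead.

```python
def model_memory_gb(n_params: float, bits_per_param: float) -> float:
    """Weight memory only; KV cache and activations add more on top."""
    return n_params * bits_per_param / 8 / 1e9

# A 100B-parameter model at various quantization levels (weights only):
for bits in (16, 8, 4, 2):
    print(f"{bits}-bit: {model_memory_gb(100e9, bits):.0f} GB")
# 16-bit: 200 GB, 8-bit: 100 GB, 4-bit: 50 GB, 2-bit: 25 GB
```

So "100B+ in tens of GB of RAM" only works at 4 bits or below, which is precisely the regime where the accuracy question becomes interesting.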
