
I'm more wondering when we'll have algorithms that will "do their best" given the resources they detect.

That would be what I call artificial intelligence.

Giving up because "out of memory" is not intelligence.




I suppose you could simulate dementia by loading as much of the weights as space permits and then just stopping. Then during inference, replace the missing weights with calls to random(). I'd actually be interested in seeing the results.
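A minimal sketch of that experiment, using a tiny numpy MLP as a hypothetical stand-in for a real model (all names and sizes here are made up for illustration): load layers until a memory budget runs out, then substitute random values for whatever is missing at inference time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a real model: four 8x8 weight matrices
# of a tiny MLP.
full_weights = [rng.standard_normal((8, 8)) for _ in range(4)]

def load_partial(weights, budget_bytes):
    """Load layers in order until the memory budget runs out;
    anything that doesn't fit is marked None (the "dementia")."""
    loaded, used = [], 0
    for w in weights:
        if used + w.nbytes <= budget_bytes:
            loaded.append(w)
            used += w.nbytes
        else:
            loaded.append(None)
    return loaded

def forward(x, weights):
    """Inference pass that replaces missing layers with calls to random()."""
    for w in weights:
        if w is None:
            w = rng.standard_normal((x.shape[-1], x.shape[-1]))
        x = np.tanh(x @ w)
    return x

# Budget that only fits the first two of the four layers.
partial = load_partial(full_weights, budget_bytes=2 * full_weights[0].nbytes)
y = forward(np.ones(8), partial)
```

With half the layers randomized you'd expect mostly noise out, but how gracefully quality degrades as the budget shrinks is exactly the interesting part.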


No, but some model-serving tools like llama.cpp do their best; it's just a matter of choosing the right serving tool. And I'm not sure LLMs couldn't optimize their own memory layout. Why not? Just let them play with it and learn. You can do pretty amazing things with evolutionary methods where the LLM is the mutation operator: you evolve a population of solutions. (https://arxiv.org/abs/2206.08896)
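The evolutionary loop being described can be sketched roughly like this. Everything below is a hypothetical toy, not the paper's actual code: the "LLM" mutation operator here just nudges one gene, where the paper would prompt a language model to rewrite a candidate solution.

```python
import random

random.seed(0)

def llm_mutate(candidate):
    """Toy stand-in for an LLM mutation: perturb one element.
    In the real setting a language model rewrites the candidate."""
    mutated = candidate[:]
    i = random.randrange(len(mutated))
    mutated[i] += random.choice([-1, 1])
    return mutated

def fitness(candidate):
    return sum(candidate)  # toy objective: maximize the sum

def evolve(pop, generations=50, keep=4):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:keep]  # elitism: the best solutions survive
        children = [llm_mutate(random.choice(parents))
                    for _ in range(len(pop) - keep)]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve([[0] * 5 for _ in range(12)])
```

The point is that the outer loop stays a plain evolutionary algorithm; swapping the mutation function for an LLM call is the only change needed.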


>Giving up because "out of memory" is not intelligence.

When people can't remember the facts/theory/formulas needed to answer a test question, or can't hold some complicated information in their head because it's too much, they usually give up too.

So, giving up because of "out of memory" sure sounds like intelligence to me.



