Hacker News

From the Carp docs:

Memory management is handled by static analysis: a value is owned by the function where it was created. When a value is returned or passed to another function, the initial function gives up ownership of it, and any subsequent use will lead to a compiler error. To temporarily lend a value to another function (for example to print it) a reference must be created, using the ref special form (or the & reader macro).

and

achieved through a linear type system where memory is owned by the function or let-scope that allocated it. When the scope is exited the memory is deleted, unless it was returned to its outer scope or handed off to another function (passed as an argument).
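The quoted semantics (ownership moves on call/return, lending via references) map closely onto Rust's ownership model. A minimal sketch in Rust, as an analogy only; the function names here are made up, and Carp's actual syntax is different:

```rust
// `consume` takes ownership of `s`; the String is dropped (freed)
// when this function's scope ends -- the callee now owns the memory.
fn consume(s: String) -> usize {
    s.len()
}

// `lend` only borrows the value via a reference (Carp: ref / &);
// the caller keeps ownership.
fn lend(s: &String) -> usize {
    s.len()
}

fn main() {
    let s = String::from("hello");
    let n = lend(&s); // lending: caller still owns `s` afterwards
    let m = consume(s); // ownership moves into `consume`
    // println!("{}", s); // compile error: use after move, as in Carp
    println!("{} {}", n, m);
}
```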



So how is memory fragmentation avoided?


That's a question for the underlying allocator, surely?

(Quite often the answer is "it isn't")


Not just the allocator's problem. I always thought a main point of a garbage collector was heap compaction (shuffling live objects around so there is more contiguous free space), but maybe I'm wrong.


Nah, only copying / generational collectors compact the heap. A simple mark-and-sweep collector doesn't, for example, and neither does reference counting. Both are used by many Lisp or Lisp-like languages.

Nothing can substitute for a really good allocator.
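To make the fragmentation point concrete, here is a toy first-fit allocator over a simulated 64-unit heap (the struct, sizes, and method names are all invented for the illustration). After freeing every other block, half the heap is free, yet a request for a modestly larger block fails because no free run is big enough; this is external fragmentation, which mark-and-sweep and reference counting leave in place and only a compacting/copying collector would fix:

```rust
/// Toy heap: each entry marks one unit as used or free.
struct Heap {
    used: Vec<bool>,
}

impl Heap {
    fn new(size: usize) -> Self {
        Heap { used: vec![false; size] }
    }

    /// First-fit: find `n` contiguous free units, mark them used,
    /// and return the start index; None if no run is long enough.
    fn alloc(&mut self, n: usize) -> Option<usize> {
        let mut run = 0;
        for i in 0..self.used.len() {
            if self.used[i] { run = 0 } else { run += 1 }
            if run == n {
                let start = i + 1 - n;
                for j in start..=i {
                    self.used[j] = true;
                }
                return Some(start);
            }
        }
        None
    }

    /// Release `n` units starting at `start`.
    fn free(&mut self, start: usize, n: usize) {
        for j in start..start + n {
            self.used[j] = false;
        }
    }

    fn free_units(&self) -> usize {
        self.used.iter().filter(|&&u| !u).count()
    }
}

fn main() {
    let mut h = Heap::new(64);
    // Fill the heap with 8 blocks of 8 units each...
    let blocks: Vec<usize> = (0..8).map(|_| h.alloc(8).unwrap()).collect();
    // ...then free every other block, leaving 8-unit holes.
    for b in blocks.iter().step_by(2) {
        h.free(*b, 8);
    }
    // 32 of 64 units are free, but no 16-unit allocation fits:
    assert_eq!(h.free_units(), 32);
    assert!(h.alloc(16).is_none());
    println!("half the heap is free, yet a 16-unit alloc fails");
}
```

A compacting collector would slide the four live blocks together, turning the scattered 8-unit holes into one 32-unit run; a non-moving allocator can only mitigate this (e.g. with size-class segregation, as jemalloc-style allocators do), not undo it.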


So what happens to long-running Lisp programs? Do they eventually run out of memory due to fragmentation?


One would expect at least a provision for a stop-the-world phase, where all mutator threads wait while a massive defragmentation runs.


Not every GC has a compaction phase, but generational ones do by design.





