
One thing I do not understand: if there is no GC, then how are stack-allocated variables freed? In vanilla Go there is no way to manually manage variable lifetime. Does this mean everything needs to be created with a make?



Currently pcz builds applications with the `compiling-runtime` option, so variables that implicitly escape the stack cause a compile error.
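
To make that concrete: in standard Go, a local whose address outlives the function escapes to the heap, and that is exactly the kind of implicit escape that becomes a compile error under this mode. A minimal, plain-Go illustration (not pcz-specific code):

    // In standard Go, n escapes to the heap because &n outlives the frame.
    // Under the compiling-runtime mode described above, an implicit escape
    // like this is rejected at compile time (per the comment), since there
    // is no GC to reclaim it.
    func newCounter() *int {
        n := 0
        return &n
    }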

For a variable you are sure lives on the stack, you can use `mark.NoEscape(&v)` to obtain its pointer (a hack to cheat escape analysis, borrowed from the official runtime & reflect packages), and the variable will be freed on function return.
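
For reference, the escape-analysis trick in the official runtime (which `mark.NoEscape` presumably mirrors) is roughly this identity function, which launders the pointer through a uintptr so the compiler no longer ties the output to the input:

    import "unsafe"

    // noescape hides a pointer from escape analysis. It is the identity
    // function, but the xor with 0 breaks the dependency the compiler
    // tracks, so the pointed-to value is not forced onto the heap.
    //go:nosplit
    func noescape(p unsafe.Pointer) unsafe.Pointer {
        x := uintptr(p)
        return unsafe.Pointer(x ^ 0)
    }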

For variables whose lifecycle you want to control, call `alloc.New` or `alloc.Make` with an explicit allocator, which can be an allocator whose storage lives on a previous call stack (there is also a default goroutine-local allocator, thread.G().G().DefaultAlloc()), to allocate a non-stack-local variable, and call `alloc.Free` with the same allocator to free that variable.
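
As a rough sketch of that explicit-allocator pattern (the `Pool` type below is a made-up stand-in, not the pcz `alloc` API; the point is only that allocation and freeing go through the same allocator object the caller controls):

    package main

    import "fmt"

    // Pool is a toy fixed-size allocator: the caller decides when a value's
    // lifetime ends by handing it back to the same pool it came from.
    type Pool struct {
        free []*int
    }

    // New returns an *int owned by the pool, reusing a freed one if possible.
    func (p *Pool) New() *int {
        if n := len(p.free); n > 0 {
            v := p.free[n-1]
            p.free = p.free[:n-1]
            return v
        }
        return new(int)
    }

    // Free returns the value to the pool for reuse.
    func (p *Pool) Free(v *int) {
        *v = 0
        p.free = append(p.free, v)
    }

    func main() {
        var pool Pool
        x := pool.New() // lifetime controlled by the caller, not a GC
        *x = 42
        fmt.Println(*x)
        pool.Free(x) // must be freed via the same allocator
    }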

The use of `make` is discouraged because the Go compiler does mid-stack inlining and escape analysis, so the return value of a `make` can end up either on the stack or on the heap; to free it, the allocator has to compare the address against `thread.G().Stack.Lo` and `thread.G().Stack.Hi`.
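
The stack-vs-heap check it describes amounts to a range test on the address; something like this, with `lo`/`hi` standing in for `thread.G().Stack.Lo` / `thread.G().Stack.Hi` (illustrative, not the pcz implementation):

    import "unsafe"

    // onStack reports whether p lies within the goroutine's stack bounds.
    // Only addresses outside [lo, hi) would actually need to be freed by
    // the allocator; stack addresses go away on function return.
    func onStack(p unsafe.Pointer, lo, hi uintptr) bool {
        addr := uintptr(p)
        return lo <= addr && addr < hi
    }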


just restart the server/app on a cron job


I have worked with a number of codebases where we did that.

In fact, I have worked with a number of codebases where I was the one who deployed that, because the app process could already do the relevant level of restart without dropping a request, and the leak was slow enough and hard enough to find that I couldn't justify keeping going until I -did- find it.

This continues to annoy me, but having made sure the leakiness wasn't able to -increase- over time without us spotting it immediately, I'm still convinced it was the business-correct decision.

(but, still, sulk)


I once worked on a project where a critical system started to leak memory during a weekend-long event. We basically took shifts every four hours to log into the boxes and do a manual rolling restart of the affected services.

We did find and fix the leak the following week once the event completed and traffic dropped to normal levels. But sometimes, you got to do what you got to do to keep the app running.


I was thinking more of some mad lads who turned off the Python garbage collector and just restarted the process when it ran out of memory. Apparently it gave a huge performance and cost boost.



