
At GitLab we recommend using CPU cores + 1 as the number of unicorn workers: https://docs.gitlab.com/ce/install/requirements.html#unicorn...



How do you configure that? A pod doesn't know ahead of time what machine it will run on. You can create node pools and use node selectors to pin the pod to a particular pool, but I'm not sure I love that idea.


Our entrypoint configures the unicorn workers via a Chef omnibus call before starting. We grab the memory and CPU count using Ohai: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/fil... and set workers to CPUs + 1.
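As a rough sketch of that entrypoint logic (the real one uses Ohai inside omnibus; `nproc` here is just a stand-in for illustration):

```shell
#!/bin/sh
# Hypothetical entrypoint sketch: size unicorn workers as CPU cores + 1.
# nproc stands in for Ohai's CPU detection in the actual omnibus entrypoint.
CPUS=$(nproc)
UNICORN_WORKERS=$((CPUS + 1))
echo "unicorn['worker_processes'] = ${UNICORN_WORKERS}"
```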

^This is pretty much what we do by default for all our regular package installs, but in some of our Kubernetes Helm charts we instead statically configure the pod resources and unicorn workers (defaulting to 1 CPU and 2 workers per front-end pod), e.g.: https://gitlab.com/charts/charts.gitlab.io/blob/master/chart...

As someone else mentioned in this thread, using the downward API might be a nice way to configure the workers.


Using the downward API, your application can read the container's CPU limit, in whole cores or in millicores, via environment variables or volume files. That lets you size your number of workers based on the resource limits you set on the container.
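A minimal sketch of that, assuming a pod/container named `app` (names and image are placeholders): `resourceFieldRef` exposes the container's CPU limit as an env var the entrypoint can read.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: unicorn-example       # hypothetical pod name
spec:
  containers:
  - name: app
    image: example/app:latest # placeholder image
    resources:
      limits:
        cpu: "2"
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: app
          resource: limits.cpu
          # divisor defaults to "1", so CPU_LIMIT is in whole cores
          # (rounded up); set divisor: 1m to get millicores instead
```

The entrypoint could then compute workers as `$CPU_LIMIT + 1` instead of probing the node's hardware.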




