Build time is a collective responsibility (yoyo-code.com)
29 points by MrJohz on March 24, 2024 | 8 comments


In our repository, I've set up a few hard limits: each translation unit cannot use more than a certain amount of memory and CPU time during compilation, and the compiled binary cannot be larger than a certain size.

When these limits are exceeded, the CI fails, and we have to remove the bloat: https://github.com/ClickHouse/ClickHouse/issues/61121

That said, these limits are still too generous today: for example, the maximum CPU time to compile a translation unit is set to 1000 seconds, and the memory limit is 5 GB, which is ridiculously high.
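
For illustration, roughly how such a per-translation-unit cap could be enforced with a compiler wrapper; this is only a sketch, not necessarily how ClickHouse implements it, and the compiler name and limit values are placeholders:

    # Hypothetical wrapper: cap CPU time and memory per translation unit by
    # setting rlimits, then exec the real compiler with the same arguments.
    import os
    import resource
    import sys

    CPU_SECONDS = 1000           # placeholder CPU-time cap per translation unit
    MEMORY_BYTES = 5 * 1024**3   # placeholder address-space cap (5 GB)

    # RLIMIT_CPU delivers SIGXCPU once the cap is hit; RLIMIT_AS makes large
    # allocations fail instead of grinding the build machine into swap.
    resource.setrlimit(resource.RLIMIT_CPU, (CPU_SECONDS, CPU_SECONDS))
    resource.setrlimit(resource.RLIMIT_AS, (MEMORY_BYTES, MEMORY_BYTES))

    # Forward all compiler arguments to the real compiler (clang++ assumed here).
    os.execvp("clang++", ["clang++", *sys.argv[1:]])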


Builds will always slow down to the point of being intolerable. The eventual build time is determined by what counts as intolerable, which varies by team.

You see companies go through periodic efforts to improve their build: the time slips past the intolerable point, they bring it back to tolerable, and the cycle repeats.

The author's real issue is that their tolerance for a slow build is lower than their peers'. It's sort of like someone who has stricter expectations of cleanliness than their roommates.


I'm a Dev with DevOps skills. In 100% of the projects I've been on, the feature-focused Devs have had a strong bias against thinking about or fixing the CI/CD pipeline, despite it being simple, directly accessible code.

Build time, like other feedback loops, is critical for team-level success. This means the team should measure and review build time and invest in it when needed; CI/CD doesn't magically fix itself.
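
As a rough sketch of what "measure and review" can look like in practice (the build command, log file, and budget below are made-up examples, not anyone's real setup):

    # Time the build command, append the result to a CSV the team reviews,
    # and complain when the build exceeds an agreed budget.
    import csv
    import subprocess
    import sys
    import time
    from datetime import datetime, timezone

    BUILD_COMMAND = ["make", "-j8"]   # placeholder build command
    LOG_FILE = "build-times.csv"      # placeholder log location
    BUDGET_SECONDS = 300              # placeholder "time to invest in the build" line

    start = time.monotonic()
    result = subprocess.run(BUILD_COMMAND)
    elapsed = time.monotonic() - start

    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), f"{elapsed:.1f}", result.returncode])

    if elapsed > BUDGET_SECONDS:
        print(f"build took {elapsed:.0f}s, over the {BUDGET_SECONDS}s budget", file=sys.stderr)

    sys.exit(result.returncode)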


My first response when I saw "collective responsibility" was "So no one is going to do it and everyone is going to diffuse the responsibility to some greater force they have no control over".


An easy way to speed up CI/CD IMO is to build your own runners with all of your tools pre-installed (directly, from a cache, or in a container). With GitHub Actions, for instance, every job starts on a fresh machine, and you get penalized for parallelizing when every machine has to spend a minute or more collecting dependencies. Ideally, jobs should spin up in a clean environment with immediate access to tooling.


I've just posted a Show HN [1] about my pet project [2] of the past few weeks. It lets you run isolated runners on-prem from any OCI image using microVMs. The project uses a base image to form a minimal runner environment, plus an image that layers the tooling required to run the runner on top of that base. You can insert your own set of pre-installed tools into that chain of images to warm up runners quickly.

[1]: https://news.ycombinator.com/item?id=39844932 [2]: https://github.com/efrecon/gh-runner-krunvm


The article is about the edit-compile-edit loop during development, which has nothing to do with CI/CD at all.

If you are pushing your code elsewhere and then waiting on CI to test single-line changes during development, you're doing it wrong. The real problem there is an obsession with running all tests in CI/CD instead of letting developers build and run tests locally.
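
A bare-bones version of that local loop, just to illustrate the idea (the watched directory and test command are assumptions, and dedicated file-watching tools do this better):

    # Poll the source tree and rerun the test command whenever a file changes.
    import pathlib
    import subprocess
    import time

    WATCH_DIR = pathlib.Path("src")         # placeholder source directory
    TEST_COMMAND = ["pytest", "-x", "-q"]   # placeholder test command

    def snapshot():
        # Map each source file to its last modification time.
        return {p: p.stat().st_mtime for p in WATCH_DIR.rglob("*.py")}

    previous = None
    while True:
        current = snapshot()
        if current != previous:
            previous = current
            subprocess.run(TEST_COMMAND)
        time.sleep(1)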


This article reminded me of a seminal C++ book on this subject:

Lakos, John (1996). Large-Scale C++ Software Design. Addison-Wesley.



