
Compilation units are a relic of a time when computers only had a few KB of memory. At this point, computers are fast enough and have enough memory to compile the whole thing in one go faster than whatever time change detection and incremental linking would save.



This is just deeply untrue. Do you really think everybody working on compilers and linkers is deeply ignorant? I can easily saturate my 64 GB RAM home setup during a compile.


While everyone else is (rightfully) correcting you, I am curious what sort of codebases you’re working with?

Are you working on large compiled software? Any game, rendering engine or large application benefits from compilation units in my experience.

Some of the libraries I work with take upwards of an hour for a fresh compile. Having sane compilation units cuts subsequent iteration down to minutes instead.
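For anyone reading along who doesn't live in C++, a rough sketch of why that works. With separate compilation units, each .cpp is compiled to its own object file, so editing one file means recompiling just that file and relinking, instead of re-parsing the entire codebase. File names here are made up; the layout is the point.

    // physics.h -- shared interface, included by both units
    #pragma once
    int step_simulation(int state);

    // physics.cpp -- one compilation unit; editing it rebuilds only physics.o
    #include "physics.h"
    int step_simulation(int state) { return state + 1; }

    // main.cpp -- another compilation unit; it is not recompiled when
    // physics.cpp changes, only relinked against the new physics.o
    #include <cstdio>
    #include "physics.h"
    int main() { std::printf("%d\n", step_simulation(41)); }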


Yeah, no. To this day Firefox developers building Gecko need a beefy desktop machine to be able to do it in a reasonable amount of time. I could do a clean build in 6 minutes with a ThreadRipper whose cores were all pegged, but forget doing the same in under an hour on a laptop.

And that was with unified builds enabled.


And more importantly, on that same machine the build would take more than ten minutes without unified builds.
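For readers who haven't run into the term: a unified (a.k.a. unity or jumbo) build batches many .cpp files into a few generated translation units, so the expensive shared headers are parsed once per batch rather than once per file. That's where the clean-build speedup comes from. Roughly, with invented file names rather than Gecko's actual generated ones:

    // Unified_cpp_example0.cpp -- illustrative generated batch file
    // The shared headers pulled in by these sources are parsed once for the
    // whole batch, which is why the clean build gets so much faster...
    #include "Element.cpp"
    #include "Document.cpp"
    #include "EventTarget.cpp"
    // ...but touching any one of these files now recompiles the entire batch,
    // which is the incremental-build trade-off.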


Hahaha tell that to my Unreal Engine build times.

A brand new 64-core AMD Epyc machine will take over an hour for a compile. Good times.


I’d really like to see a comparison someday between Epic’s weird C#-based build system and something like CMake+Ninja.

I suspect there are compilation optimizations to be made, but I don’t think it would save more than 30% here and there.


> I suspect there are compilation optimizations to be made

There definitely are. I've spent a lot of time with UBT, and a "reasonable" amount of time with cmake and friends. UBT isn't quite the same as CMake + Ninja. UBT does "adaptive" unity builds, globbing, and a couple of other things.
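A sketch of the "adaptive" part, as I understand it (file names invented, not UBT's real output): the unity blob keeps the clean-build win, while files that look actively edited are pulled out of the blob and compiled on their own, so iterating on one file doesn't pay the whole batch's recompile cost.

    // Module.MyGame.cpp -- illustrative generated unity file
    #include "EnemyAI.cpp"
    #include "Pathfinding.cpp"
    // PlayerController.cpp was detected as locally modified, so it is excluded
    // from this blob and built as its own translation unit; edits to it rebuild
    // only that one object file instead of the whole batch.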

> but I don’t think it would save more than 30% here and there.

Agreed. The clean build with UBT is painfully slow compared to CMake + Ninja, but the full builds themselves are pretty good, and I'd bet there's less low-hanging fruit there.

I did a good chunk of work on improving compile times in Unreal, and there is definitely low-hanging fruit in the engine. Some changes to how UHT handles forward declares would also make a significant difference.
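For readers who don't spend their days in C++ headers, the forward-declaration point in a nutshell: a header that only holds pointers or references to a type can forward declare it instead of #including that type's full header, so changes to that header stop rippling through everything that includes yours. A contrived sketch (the engine types are real, the class and file names are made up; this shows the general pattern, not the specific UHT change being suggested):

    // MyActor.h
    #pragma once
    #include "CoreMinimal.h"
    #include "GameFramework/Actor.h"
    #include "MyActor.generated.h"

    class UStaticMeshComponent;   // forward declaration: enough for a pointer member,
                                  // no need to pull in the whole component header here

    UCLASS()
    class AMyActor : public AActor
    {
        GENERATED_BODY()

        UPROPERTY()
        UStaticMeshComponent* Mesh;
    };

    // MyActor.cpp -- the full include is only paid for in this one translation unit
    #include "MyActor.h"
    #include "Components/StaticMeshComponent.h"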


The big issue I had with UBT, in addition to speed, was how difficult it was to debug when it did the wrong thing. Often this came up when adopting new Xcode versions, where CMake gave a lot of escape hatches to adapt, whereas UBT required spelunking.

In places, there are multiple layers of historic cruft that just seem arcane.

Last year, Epic released a video where an engineer went through it, and even they hit points where they said: “I have no idea what this area of code does.”


No disagreements there. Spelunking is a great word for it, but spelunking is a requirement for most "deep" Unreal Engine development. On the other hand, it's incredibly empowering to switch your IDE to build UnrealBuildTool, put "My project Development Win64" as the arguments, and be able to debug the build tool there and then to see what it's actually doing.


That’s true. I should give it a go again now that Rider is available. It’s been a huge QoL improvement in the rest of my Unreal/Unity development work.


I would as well! It’s honestly a bit beyond me; the Unreal build tools run deep, so I imagine it would take some effort.


Why do clean builds of my code take like 30m then?



