
Although tracking it would help, I see 2 issues (in opposite directions): I think there are times when slower performance is an acceptable cost. And, I think that if you allow tiny slowdowns, over time we'll get back here. There's judgment involved.



Even just printing the metrics on every pull request, and not gating on them, would make people think about it more.

It would be expensive in terms of CI hours, but, at least for the community as a whole, it would probably be worth it for something run as often as a compiler.
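Even a rough sketch would do; something like the following (the build command and output shape here are placeholders, not any project's actual tooling) could run in CI and have its output echoed into a PR comment:

    # Hypothetical sketch: time a full build and emit metrics that CI
    # could print on every pull request. BUILD_CMD is a placeholder for
    # whatever the project's real build invocation is.
    import json, subprocess, time

    BUILD_CMD = ["make", "-j8"]  # assumption, not the actual command

    start = time.monotonic()
    subprocess.run(BUILD_CMD, check=True)
    elapsed = time.monotonic() - start

    metrics = {"wall_clock_build_seconds": round(elapsed, 1)}
    print(json.dumps(metrics))  # a CI step can paste this into the PR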


You could make the argument that:

- a faster build time should not come at the expense of a more extensible compiler - one that can be modified easily to add capabilities and features to the build output

- a slower build time is acceptable if the build result executes faster or more efficiently. One slower compile vs one million faster executions is keeping your eye on the prize.


The argument is simple IMO:

* release target build times aren't an issue. They can be done overnight and aren't part of the work cycle.

* un-optimized build times are part of the work cycle and should be as speedy as possible.


> * release target build times aren't an issue. They can be done overnight and aren't part of the work cycle.

Emphasis added. This isn't true for many use cases. There are times when release build + single run is faster than debug build because run time is relatively long (e.g. scientific sims with small code bases + big loops). There are times when debug builds simply aren't sufficient (e.g. when optimizing code).


OK that's true but I think my point still stands. Someone who is doing very heavy scientific computation with long-run times will still prefer a release build that is optimizing for run-time speedup over compile-time speedup, within reason of course.


I agree the point still largely stands, that's why I added the emphasis. Maybe I should have made the intent of that clearer.


I was going to counter by asking whether WebKit doesn't use LLVM to compile hot JavaScript code paths, but it turns out they have already replaced it with a bespoke optimizer (B3) with faster compilation times. https://webkit.org/blog/5852/introducing-the-b3-jit-compiler...


This all depends on build scenario/configuration.

At -O3, take all the time you need. At -O2 and below, please don't regress the performance of the compiler: as the article states, we are happy enough with the level of output we currently get.
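To put rough numbers behind that, here's a small sketch of how one might compare compile times across -O levels; the compiler name (cc) and source file (hello.c) are illustrative placeholders, not a real benchmark setup:

    # Hedged sketch: time one translation unit at each optimization level.
    # Assumes a C compiler "cc" on PATH and a hello.c in the current
    # directory; both are stand-ins for illustration only.
    import subprocess, time

    for opt in ("-O0", "-O1", "-O2", "-O3"):
        start = time.monotonic()
        subprocess.run(["cc", opt, "-c", "hello.c", "-o", "hello.o"], check=True)
        print(f"{opt}: {time.monotonic() - start:.2f}s")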


Yes, exactly, it's a question of what you ask from the compiler.

Not sure whether the -O2 and -O3 system is the exact right choice to communicate that. But any better system would also preserve the user's ability to make this trade-off.

Though I don't think there's necessarily anything magic about -O2. They could conceivably also only protect -O0 and perhaps -O1. Or give finer grained control over the trade-offs.


In some scenarios -O2 is the only usable output level even during development, so it's important to keep it usable in development iterations; -O3 is "I don't care about compile time".


In that case, landing features that decrease benchmark performance should be accompanied by a corresponding performance increase elsewhere. The WebKit project has done this for nearly 20 years.


I think it's mostly a false dichotomy: *I think there are times when slower performance is an acceptable cost*

We should accept significantly slower compile times for any non-negligible win in runtime performance, BUT this slowdown should only be incurred in optimized builds (>= -O1) and have no effect on debug compile times, which should be fast; it almost doesn't matter if debug builds get slower at runtime.



