Most code is executed a lot more frequently than it is compiled, so if I can get a 1% speed increase with a 100x compile slowdown, I'll take it.
I don't want to see good PRs that improve LLVM delayed simply because they cause a speed regression.
clang -Xclang -load -Xclang libsouperPass.so -mllvm -z3-path=/usr/bin/z3
LLVM is super configurable, so you can make it do whatever you want.
Clang defaults are tuned for the optimizations that give you the most bang for the time you put in, while still being able to compile a Web browser like Chrome or Firefox, or all the packages of a Linux distribution, in a reasonable amount of time.
If you don't care how long compilation takes, then you are not a "target" clang user, but you can just pass clang extra arguments like those mentioned above, or even fork it to add your own -Oeternity option that takes your project and tries to compile it for a millennium on a supercomputer for that little extra 0.00001% reduction in code size, at best.
Because often, code compiled with -O3 is slower than code compiled with -O2. Because "more optimizations" does not necessarily mean "faster code", as you seem to be suggesting.
Indeed, due to patterns of resource use and availability (e.g. the memory wall), compiler optimisations can only solve 1-10% of performance problems a piece of code might have. To paraphrase Mike Acton: The vast majority of your code consists of things the compiler can't reason about.
: [Mike Acton, "How to Write Code the Compiler Can Actually Optimize", GDC2015.](https://m.youtube.com/watch?feature=youtu.be&v=x61H6qEtK08&t...)
I can't imagine anything useful coming out of it.
...even if the actual debugging would be much easier with a debug build.
I work on seasonal programs. I can only do fully realistic tests in the month of April (depending on the weather it shifts a few weeks), meaning my window to test is closed for the next 11 months. I have learned to find ways to run my code in less realistic situations.
That one is caught by a warning, but there are thousands of possible errors that can be exposed this way.
Compilers have bugs all the time, but code in a write-test-debug-cycle typically has many more.
-O0, -O1, -O2, -O3, -Ofast, -Os, -Oz, -Og, -O, -O4
Specify which optimization level to use:
-O0 Means “no optimization”: this level compiles the fastest and generates the most debuggable code.
-O1 Somewhere between -O0 and -O2.
-O2 Moderate level of optimization which enables most optimizations.
-O3 Like -O2, except that it enables optimizations that take longer to perform or that may generate larger code (in an attempt to make the program run faster).
-Ofast Enables all the optimizations from -O3 along with other aggressive optimizations that may violate strict compliance with language standards.
-Os Like -O2 with extra optimizations to reduce code size.
-Oz Like -Os (and thus -O2), but reduces code size further.
-Og Like -O1. In future versions, this option might disable different optimizations in order to improve debuggability.
-O Equivalent to -O2.
-O4 and higher
Currently equivalent to -O3
As for the long-baking compile, there is the concept of supercompilation, which checks the branches to determine whether there are any places where a function is called with constants, and partially evaluates the function with those constants frozen (similar to partial application / parameter binding, except you get a newly compiled function). But then it has to determine whether the branch elision makes it worth dropping the other versions of the function. It's a cool topic that I researched a lot about a decade ago, but I think it's not an area with a lot of active interest for AOT compilers.
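As a rough illustration of the idea (the function names here are made up for the example, not taken from any real supercompiler), specializing a function on a constant argument means freezing the constant and eliding the branch it controls:

```c
#include <assert.h>

/* A generic function: the `scale` parameter controls a branch. */
static int transform(int x, int scale) {
    if (scale == 1) {
        return x + 1;     /* cheap path */
    }
    return x * scale + 1; /* general path */
}

/* What a supercompiler might produce when every call site passes
 * scale == 1: the constant is frozen and the branch is elided. */
static int transform_scale1(int x) {
    return x + 1;
}
```

The hard part the comment alludes to is the cost model: deciding whether the specialized copy is worth the code-size cost of keeping both versions around.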
Is that really true? I'd have thought most code outside inner loops benefits almost negligibly from optimization.
There are some optimizations that make a big difference everywhere:
- consolidating pure function calls. This can save arbitrary amounts of time. Classic example is strlen being called many times. I know you're not asking about inner loops, but this can change the complexity of loops which is a big big win.
- mem2reg. Kind of self explanatory. Registers are fast but we write C and C++ using addressable objects. Most compilers make a decent effort here even with optimizations turned off.
- global value numbering. This allows loads and stores to be removed or moved. Often prevents cache misses, or puts them off until the end of a function where they don't block execution.
- strength reduction. Turning your divisions into shifts. Turning your && into &. Etc. It is not unusual for these peephole changes to save 10s of cycles per instruction replaced.
These are also really fast optimizations (mem2reg can be extremely slow if "optimal", but the heuristic versions that everyone uses are quick).
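The strlen case above can be sketched concretely (hypothetical helper names; this is the textbook example, not code from this thread). Re-evaluating strlen in the loop condition makes the loop O(n²); hoisting it, which the compiler may do because strlen is pure and the string isn't modified, restores O(n):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* O(n^2): strlen(s) is re-evaluated on every iteration. */
static size_t count_slow(const char *s, char c) {
    size_t n = 0;
    for (size_t i = 0; i < strlen(s); i++) {
        if (s[i] == c) n++;
    }
    return n;
}

/* O(n): the repeated pure call is hoisted out of the loop, which is
 * what consolidating pure function calls effectively achieves. */
static size_t count_fast(const char *s, char c) {
    size_t n = 0;
    size_t len = strlen(s);
    for (size_t i = 0; i < len; i++) {
        if (s[i] == c) n++;
    }
    return n;
}
```

This is the "change the complexity of loops" win: the two functions compute the same result, but only the second scans the string once per character rather than once per iteration.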
If you know you won't care, you can mark the function as cold. That said, the compiler might ignore you and decide you cannot possibly really want to disable something like mem2reg.
Besides, little things add up even outside of tight loops. Slow code in rarely used areas is still noticeable when that rare thing happens.
Let's set aside technicalities and assume it's a real 5x improvement and all the files are mirrored seamlessly.
There are open-source ones but I'm also aware of at least two internal, custom-developed ones.
I could see this having some use in an internal manner for things like game development where you have absolutely enormous codebases (as an abstraction above compile servers and similar).
In many companies the build cluster runs on the developers' workstations themselves, which has the benefit of fully using idling machines. The drawback is higher maintenance due to less reliability of such machines.
Would your company accept internal hosting for such a cluster, i.e. paying for the hardware themselves?
Their compiler did optimizations to produce (slightly) better code than any compiler you could buy.
A lot of users were uneasy with it.
OpenJ9 for Java can use a cloud compiler,
.NET when AOT compiled for Windows Store uses cloud compilers, https://blogs.windows.com/windowsdeveloper/2015/08/20/net-na...
And as of Android 10, the cloud compiler is everyone's phone, because PGO data gets uploaded to the Play Store and improved with feedback from each device.
As the author of one of the changes which could have unknowingly caused a 1% regression, I really appreciate this work measuring and monitoring compile times.
Thanks to nikic for noticing the regression and finding a solution to avoid it.
But I guess the LLVM project should probably start by making code-reviews mandatory, gating PRs on passing tests so that master doesn't get broken all the time, etc. I really hate it when I update my LLVM locally from git master, and it won't even build because somebody pushed to master without even testing that their changes compile...
For Rust, I hope Cranelift really takes off someday, and we can start to completely ditch LLVM and make it opt-in, only for those cases in which you are willing to trade-off huge compile-times for that last 1% run-time reduction.
rustc's CI doesn't prevent merging PRs that impact performance: while the reviewer can request benchmarks beforehand and choose not to approve the PR if it introduces a regression, all other benchmarks are run after the commits are merged to master.
It is incredible how much extra hardware can make development easier. I find it so funny that our main instinct for speeding up a project's development is to throw people at it (hence the book "The Mythical Man-Month"), when in reality you should be throwing hardware at it, and probably extra testing.
> For Rust, I hope Cranelift really takes off someday, and we can start to completely ditch LLVM and make it opt-in, only for those cases in which you are willing to trade-off huge compile-times for that last 1% run-time reduction.
I mean, that'd be nice, but I definitely don't see that happening anytime in the next 5 years. The current plan is for it to be for debug only, and due to the IR rust emits, the gap between debug and release can be huge.
> Waymarking was previously employed to avoid explicitly storing the user (or “parent”) corresponding to a use. Instead, the position of the user was encoded in the alignment bits of the use-list pointers (across multiple pointers). This was a space-time tradeoff and reportedly resulted in major memory usage reduction when it was originally introduced. Nowadays, the memory usage saving appears to be much smaller, resulting in the removal of this mechanism. (The cynic in me thinks that the impact is lower now, because everything else uses much more memory.)
Any seasoned programmers would remember a few of such things - you undo a decision made years ago because the assumptions have changed.
Programmers often make these kinds of trade-off choices based on the current state: the typical machines the program runs on, the typical inputs it deals with, and the current version of everything else in the program. But all of those environmental factors change over time, which can make the inputs to the trade-off quite different. Yet it's difficult to revisit all those decisions systematically, as they require too much human analysis. If we could encode those trade-offs in the code itself, in a form accessible to a programmatic API, one could imagine a machine learning system that makes those trade-off decisions automatically over time, traversing the search space of those parameters as everything else changes. Unfortunately, today's programming languages don't allow encoding such high-level semantics, but maybe it's possible to start small: which associative data structure to use could be chosen relatively easily, and the initial size of a data structure could be picked automatically based on benchmarks or even real-world metrics.
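To make the "start small" idea concrete, here is a hedged sketch (everything here is hypothetical; this is not an existing API): expose the trade-off as a named tunable constant, so an external benchmark-driven tool could re-derive it as machines and inputs change:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical tunable: below this element count, a linear scan tends
 * to beat a binary search on typical hardware. A benchmark-driven tool
 * could rewrite this one constant over time instead of a human
 * revisiting the whole design decision. */
#define SMALL_LOOKUP_CUTOFF 32

/* Return the index of key in a sorted array, or -1 if absent. */
static int find(const int *sorted, size_t n, int key) {
    if (n < SMALL_LOOKUP_CUTOFF) {
        for (size_t i = 0; i < n; i++)  /* linear: cache friendly */
            if (sorted[i] == key) return (int)i;
        return -1;
    }
    size_t lo = 0, hi = n;              /* binary search */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (sorted[mid] == key) return (int)mid;
        if (sorted[mid] < key) lo = mid + 1; else hi = mid;
    }
    return -1;
}
```

The point is not this particular cutoff value, but that the decision is encoded as data a program could search over, rather than baked invisibly into the code's structure.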
When you say state space, I think about what is dynamically changing. If you can select one of two design decisions e.g. at compile time then, yes, your state space is bigger, but you don't have to reason about the whole state space jointly. The decision isn't changing at run time.
For example, template parameters in C++. The STL defines map&lt;K, V&gt;. You don't have to test every possible type of key and value.
Tooling, runtime sampling, or just code review could reveal when the assumptions go awry.
Phoronix.com has a lot of Clang benchmarks over the years.
I recall seeing some benchmark that showed that as Clang approached GCC in performance of compiled output, the compile speed also went down to approach GCC levels.
But I haven't managed to find that exact benchmark yet.
Pretty much the sole point of Clang/LLVM to the corporate sponsors is to get GCC, but without the GPL.
And getting back to GCC, I had to study GIMPLE during my compiler assignments.
What LLVM has going for it versus GCC is the license, especially beloved by embedded vendors; companies like Sony, Nintendo, SN Systems, and CodePlay can save some bucks on compiler development.
Based on that, my understanding was that while intermediate representations were certainly not new, being strict about not mixing the layers was still quite rare. He specifically claims that GCC's GIMPLE is (was?) not a fully self-contained representation.
I'm not an expert in any of this. Just sharing the link.
The intermediate language was strongly stack oriented, to the extent that local CPU registers were mapped to stack locations. This worked well for the PDP-11, VAX, and MC68k, and to some extent x86.
But when Sun's SPARC got popular it became clear that mapping stack register windows was not going to result in good performance.
One option would have been to define a new register-oriented intermediate language, just like LLVM has now. But by that time research interests at the VU Amsterdam had shifted, and this was never done.
Key point: a memory-safe systems programming language, which apparently would have been too slow for the target hardware, achieves the goal of being usable to write an OS on 70s hardware thanks to the compiler's IL representation and multiple execution phases (sounds familiar?).
The license is probably considered an advantage by many companies. However it is definitely not the only reason for LLVMs success. There are many technical reasons as well, e.g. cleaner code and architecture. My personal impression is that a lot of research and teaching has moved from GCC to LLVM as well, universities usually do not care that much about the license.
Yes, GCC has GIMPLE (and before that just RTL), but it is not as self-contained as LLVM's IR. In GCC, the front-end and middle-end are deliberately tangled for political reasons. Nevertheless, I agree that LLVM isn't as revolutionary as the poster you are replying to is claiming; reusing an IR for multiple languages was done before. However, I don't think any other system was as successful as LLVM at this. E.g. Rust, Swift, C/C++ via clang, Julia, Fortran, JITs like JSC/Azul JVM are/were using LLVM as a compilation tier, GPU drivers, etc. Those are all hugely successful projects, and if you ask me this is an impressive list already while not even complete. It seems most new languages these days use LLVM under the hood (with Go being the exception). IMHO this is also because LLVM's design was flexible enough to enable all those widely different use cases. GCC supports multiple languages as well, but it never took off to the degree that LLVM did.
I don't know all the compilers you mentioned, but how many of those were still maintained and available on the systems people cared about by the time LLVM got popular? Are those proper open-source projects?
No, those projects aren't open source at all; they used their own compilers, or forked variants of GCC which they couldn't reveal thanks to NDAs. Now, thanks to clang's license, they have replaced their implementations, contributing back only what they feel is relevant to open source.
Most (all?) GCC frontends compile to a common IR. The main difference really is that GCC doesn't market that as an interface for interacting with the compiler. In LLVM, the IR is the product; in GCC, it's the individual language compilers.
No, that's LLVM. Clang was Steve Naroff's baby after he stepped down from managing the team.
I think Objective-C++, basically an implementation of Objective-C that makes it possible to link with C++, must be from 2000 or later, but even a semi-exact date isn’t easy to find (probably in Clang’s release notes)
Even the link I provided, who knows for how long it will still stay up.
Plus, in this case, what does Clang have to do with it, given that we are talking about gcc?
Doesn't most of GCC's optimisation happen at the level of an internal IR?
It's good to have a bit of diversity and options, especially when they're all trying to be compatible.
It also has the added benefit of reducing the need to use compilers like Intel's (not sure about current benchmarks), but I really wouldn't want to ship something for AMD CPUs built with Intel's compiler.
See https://en.cppreference.com/w/cpp/compiler_support, and embedded compilers or more legacy platforms aren't even listed there.
> It's good to have a bit of diversity and options
It can be consistent to hold that those two statements do not hold in this case. GCC requires that developers uphold certain user freedoms (i.e. compiler extensions must be free software). Clang allows users' freedoms to be more easily violated.
It's fine if you want to hold those two opinions, but if you state them as if they're the only opinions, that's not great. If you don't acknowledge that they're predicated on beliefs such as "user freedom isn't important to the expense of software improving in other ways" (or such), then of course you'll have trouble understanding why one might view clang/llvm in the negative light of effectively enabling "GCC, but without GPL".
So there was a real technical need. IIRC, there was also a personal need, as Steve wanted to do something else, and I am sure not being dependent on gcc was also a big deal.
From long experience, Apple doesn't like to be dependent on others for crucial bits of their tech stack. Relations with the gcc team weren't the best, even without the GPL issues, although the new GPL v3 was also seen as a problem. I think Apple switched before having to adopt v3.
Edit: that email from ESR cites a talk from an LLVM developer which goes into more detail about this argument and about a lot of the architectural differences between GCC and LLVM: https://www.youtube.com/watch?v=lqN15lrADlE. I'm not sure how up-to-date this is though, as it looks like it was recorded in 2012.
At least in the Apple ecosystem, the results of LLVM certainly spoke for themselves. Immediately after hiring Chris Lattner and adopting LLVM, Apple's developer experience began to improve massively in a very short time, coming out with automatic reference counting, Objective-C 2.0, the Clang static analyzer, live syntax checking and vastly improved code completion, bitcode, Metal shaders, and Swift within just a few years. Of course, I don't know how much of this was due to technical reasons rather than legal reasons (but Apple did make most of this work open-source).
As well, RMS no longer controls gcc, and has not since the late 90s when I engineered a fork of gcc from the FSF and into the hands of an independent steering committee. At the time such an idea was radical...thankfully it is now commonplace.
The reasons are:
- As somebody else mentioned, Apple redistributes developer tools, clang being the poster child
- Since they release OS products, they don't want to co-mingle their software with GPL code. (So they use an older bash on Mac OS X.)
- fear of an Apple developer quietly copying GPL source into a commercial product (well-founded, actually)
- Apple Legal exerting an "abundance of caution" on IP
- at this point, it's institutional. When I worked there, Linux and MySQL were forbidden, for example, but that has relaxed recently.
Also, I think you misunderstand the GPL. If you distribute modified gcc, anybody receiving it can ask for sources. So employees plus end-users.
(One of the strangest examples is that Yamaha uses real-time linux in their synths, and you can download the GPL portions. I can't imagine a musician ever wanting to do that!)
Your synth example is a good one actually. As a synth owner, I'd probably love to be able to replace the software on it with modified versions from the Internet or modify it myself. Linux is GPLv2, though.
How that morphed into disallowing GCC, idk. Maybe they want to prohibit users from installing their own compilers at some point?
Others have mentioned the patent clause. That one seems reasonable as well. In fact LLVM uses the Apache 2.0 license which also has a patent grant, albeit with a smaller scope. Apple could probably file a patent for a feature, then get a university department to implement that feature, then sue other LLVM users (like Sony). With the GPLv3 that loophole does not exist.
Their actual marketplace behaviour demonstrates that they're allergic to GPL version 3 specifically, not the GPL or copyleft in general.
Even distributing GPL software isn't a big deal; nowadays even Microsoft does that, shipping an entire Linux distribution in Windows!
There are no technical motivations for not using GPL software, you can do that as long you respect the GPL license (i.e. release the modified source code).
I think that what Apple does is more a policy of going against the FOSS community for political reasons than anything else, and to me that is bad, in a world where now even Microsoft is opening up a lot to the open source world.
> I think that what Apple does is more a policy of going against the FOSS community for political reasons than anything else
This is the real reason why FAANGs push for non-GPL licenses.
GPL's end goal is to build a community where developers, testers, power users and regular users connect with each other and share knowledge, not just code.
FAANGs want to wedge themselves as the middleman between developers and end users. They view such community as a threat.
well... ostensibly that is where the money can be made (at the point they meet the end-user)
The best explanation I have seen is speculation that Apple's software patents are seen as a critical part of Apple's business model and competitive strategy, especially 10 years ago when their anti-GPL stance was formed. The GPLv3 patent clause adds risk, especially the patent agreement clause, and if you intend to spend millions on software patent lawsuits, then staying away from GPLv3 looks much more reasonable, especially if you ask the patent lawyers.
Presumably you mean that Apple specifically avoids GPL3 then, because bash was never distributed under anything other than the GPL to my knowledge. Bash moved from GPL2 to GPL3 though.
As for bash, in the latest release the default shell is zsh, though I think the old bash is still there, although clearly on the way out.
A lot of interesting LLVM use-cases are all about that, adding custom frontends or backends used in software that is distributed. Some random examples:
The Island platform that I linked to is one such commercial example, made and sold by RemObjects. OpenCL support in graphics drivers is another example.
GCC's Go frontend is still commercially supported as part of RHEL7 but was replaced by the more popular Go implementation in later versions.
GCC's Java frontend used to be commercially supported in the days before OpenJDK.
Modula-3 frontend was commercial, by Elego Software Solutions.
Then naturally Objective-C and Objective-C++ frontends used on NeXTSTEP and OpenSTEP.
I'd say nobody has ever asked, but that isn't true. One person in the test group actually read the entire EULA and sent us $5 to get the source code. (I found out when his letter was returned to sender. Even though it was still internal, not released, legal demanded we show good faith in correcting the problem, so we officially went through the entire release process just to fix the address, at great expense.) Testers like that are worth far more than anyone pays them.
I did a quick Google and found this: https://github.com/marketplace/actions/continuous-benchmark
Lots of projects have "are we fast yet" type graphs but I'm not aware of a generic tool that also allows you to set alerts for fine grained benchmarks (I made a toy that alerts you to x-sigma increases in cache misses when testing compiler backend patches for example).
One of those projects that I actually want to build, but that's slightly too dull to finish.
just checked and my build folder with llvm & clang is 3GB. That's a release build (pass -DCMAKE_BUILD_TYPE=Release !!) - you don't need a debug build unless you're hacking on llvm itself
> Then you need that much for the install
you want make install/strip, not make install (but why do you need to install ? you can run clang from the build dir just fine)
That's extremely wasteful, especially for debug builds.
Luckily, shared library builds of LLVM are easy. And gp might want to invest in a Threadripper. It's possible to compile all of LLVM in a few minutes nowadays :)
I just checked the size of the clang-10 binary that was built. It is fucking 1.9 GB big (gcc 9.2 on the same machine is 6.1 MB).
But you're comparing a debug, unstripped build with a release, stripped build! (Also, 6.1 MB seems low for GCC? The main GCC binary is cc1plus, not gcc / g++.)
It's a shame; one of the standout features of llvm/clang used to be that it was faster than GCC. Today, an optimized build with gcc is faster than a debug build with clang. I don't know if a 10x improvement is feasible, though; tcc is between 10-20x faster than gcc and clang, and part of the reason is that it does a lot less. The architecture of such a compiler may by necessity be too generic.
Here’s a table listing build times for one of my projects with and without optimizations in gcc, clang, and tcc. Tcc w/optimizations shown only for completeness; the time isn’t appreciably different. 20 runs each.
│                               │ Clang -O2  │ Clang -O0  │ GCC -O2    │ GCC -O0   │ TCC -O2      │ TCC -O0      │
│ Average time (s)              │ 1.49 ±0.11 │ 1.24 ±0.08 │ 1.06 ±0.08 │ 0.8 ±0.04 │ 0.072 ±0.011 │ 0.072 ±0.014 │
│ Speedup compared to clang -O2 │ -          │ 1.20       │ 1.40       │ 1.86      │ 20.59        │ 20.69        │
│ Slowdown compared to TCC      │ 20.68      │ 17.20      │ 17.72      │ 11.12     │ -            │ -            │
Did you mean "an optimized build with gcc is faster than a debug build with clang"?
The reason is that many of the optimization passes that run first, like dead code elimination, can remove a lot of code early on, so "optimized" builds end up processing significantly less code, which is inherently faster.
The OP might just not be aware of what a "debug build" is. The goal of a debug build is for the binary to execute your code as closely to how you wrote it as possible, so that you can easily debug it.
Their goal isn't fast compile times. If you want fast compile times, try using -O1. At that level, both clang and gcc do optimizations that are known to be cheap and that remove a lot of code, which speeds up compile times significantly. Another trick to speed up compile times is to use -g0, and if you do not need exceptions, use -fno-exceptions, since those make the front-end emit much less data, which results in less data having to be processed by the backends.
Emitting debug symbols doesn't change compile times.
Or maybe my projects are the interesting ones :D
My project is c, which is why I can use tcc.
Anyway, that makes sense; in c++, there's a lot of 'extra' stuff, single lines of code that add up to much more than they would seem. I bet -O1 lets the compiler inline a lot of std::move, smart ptr semantics; elide monomorphisations, copy constructors/RVO; etc. Which just means less code to spit out the backend.
Yes for C what you mention makes perfect sense.
I agree with you about C++ as well. In particular, C++ templates end up expanding a lot of duplicate code, and at O1 the compiler can remove them.
> For every tested commit, the programs are compiled in three different configurations: O3, ReleaseThinLTO and ReleaseLTO-g. All of these use -O3 in three different LTO configurations (none, thin and fat), with the last one also enabling debuginfo generation.
I would have thought for developer productivity tracking -O1 compile times would be better wouldn't it?
I'm happy for the CI to spend ages crunching out the best possible binary, but taking time out of the edit-compile-test loop would really help developers.
As long as you can tolerate the warmup, and at least for Java it's not really a big deal for many apps these days because C1/C2 are just so fast, you get fast iteration speeds with pretty good code generation too. The remaining performance pain points in Java apps are things like the lack of explicit vectorisation, value types etc, which are all being worked on.
"#pragma once" in the header files helps as does using a pre-compiled header file.
Obviously, removing header files that aren't needed makes a difference too.
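For anyone unfamiliar with the tip above, the two common forms look like this (an illustrative header fragment; the file and function names are made up):

```cpp
// my_header.h: with "#pragma once" the compiler can skip re-reading the
// file on repeated inclusion, which is where the compile-time saving
// comes from on large translation units.
#pragma once

void do_work();

// The classic include-guard equivalent, for comparison:
//
//   #ifndef MY_HEADER_H
//   #define MY_HEADER_H
//   void do_work();
//   #endif
```

Note that `#pragma once` is non-standard but supported by all major compilers, including clang, gcc, and MSVC.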
1) LLVM does not use pull requests, and code review isn't mandatory for the main contributors.
2) Until a few months ago, LLVM didn't even have any testing system before code is pushed to the master branch: folks just push code directly after (hopefully) building/testing locally.
The real root cause is that no one cares enough to really invest deeply into this. There is also not a clear community guideline on what is acceptable (can I regress O2 compile time by 1% if I improve "some" benchmarks by 1%? Who defines the benchmarks suite? etc.)
There are occasionally some feeble attempts to improve that, but of course people are wary of the potential politics.
It would be expensive in terms of CI hours, but at least for the community as a whole it would probably be worth it on something run as often as a compiler.
- a slower build time should not come at the expense of a more extensible compiler, one that can be modified easily to add capabilities and features to the build output
- a slower build time is acceptable if the build result executes faster or more efficiently. One slower compile vs one million faster executions is keeping your eye on the prize.
* release target build times aren't an issue. They can be done overnight and aren't part of the work cycle.
* un-optimized build times are part of the work cycle and should be as speedy as possible.
Emphasis added. This isn't true for many use cases. There are times when release build + single run is faster than debug build because run time is relatively long (e.g. scientific sims with small code bases + big loops). There are times when debug builds simply aren't sufficient (e.g. when optimizing code).
-O3 - take all the time you need, -O2 and below - please don't regress the performance of the compiler - as article states - we are happy enough with the level of output we currently get.
Not sure whether the -O2 and -O3 system is the exact right choice to communicate that. But any better system would also preserve the user's ability to make this trade-off.
Though I don't think there's necessarily anything magic about -O2. They could conceivably also only protect -O0 and perhaps -O1. Or give finer grained control over the trade-offs.
We should accept a significantly slower compile time for any non-negligible win in runtime performance, BUT this slowdown should only be incurred in optimized builds (>= -O1) and have no effect on debug compile times, which should be fast; it almost does not matter if debug builds get slower at runtime.
Rust doesn't replace careful planning of parallel infrastructure or performance optimization, but it makes it possible to maintain the parallel system.
GCC continues to emit relatively better debuginfo at similar optimization levels. Samy Al Bahra has written and talked about this a couple of times over the years.
This is a great post full of really interesting technical details. Don't be put off by the title!
It could have been easily avoided by picking a different title. Pointing this out might save others from making the same mistake.
If you care about threads not becoming distracted, the primary thing to cultivate is restraint. There's also downvoting and flagging. And anyone who notices a major distraction in a thread is very welcome to email firstname.lastname@example.org so we can look into it. That doesn't have to be a political flamewar—it could be just a generic tangent sitting as the top subthread.
Perhaps it's just that I'm not letting politics take over my life.
Oh they are, I just prefer to focus on good things rather than bad things.
> things are pretty good for you.
Things are pretty good for me, but it's not because of politics.
I'm not refuting your generalization about the slogan and how some people perceive it. It's definitely not good that we're in a situation where this is happening.
Please be very careful about assuming things about other people, else you will fall into the same trap as people of hate. I never indicated that the slogan and its background don't apply to me, and I'm not dismissing it.
I don't want this to become an arguement or be dismissive of anyone's views, I just want to make sure that people don't drag politics into a place where it doesn't belong, and not to make bad assumptions which don't help at all.
If you can afford not to focus on them, then trust me, they are absolutely not focused on doing that. Your experience is very different from those who are actually suffering from them.
> Things are pretty good for me, but it's not because of politics.
Only because you class those things that go in your favour as "not politics" and only those that challenge your position as "politics".
You can stand up for the rights and freedoms of others without turning your nose.
Showing that something doesn't bother you reduces the power bullies and oppressors have over you.
You can even reframe it. Make America Socialist Again. See what I did there?
You kinda sound offended. Is there a Popperian analogue here, that we must be unoffended by everything but offense itself?
Hostility breeds hostility. Cultural segregation raises the barrier for empathy.
I don't always practice what I preach, though. In a recent thread I admitted a preference for language that puts censors on edge.
Do you disagree with the advice I'm giving here?
I'm advising writers that some people reading their content may make this association, so they should avoid the phrase unless that is their deliberate intent.
It sounds like a linguistic death spiral to me. Good luck.
Language has context and meaning outside of its immediate definition and will affect the perception of your written word. Don't make people think of authoritarianism and kids huddled in cages if you want to share your cool technical insights with the world.
The author of this piece is based in Berlin. I'm certain they weren't thinking about any negative potential connotations that this title could hold - they were just reusing a common phrase.
Maybe it is just a joke, a way to boost clicks, or some kind of myopic view that says technical people are above politics, a mix of that, none of the above, I don't know.
But let's not pretend it was not made on purpose.
Authors constantly adapt their writing to their audience.
Alternative titles that convey more information, invites no political discussion, and don't break HN guidelines:
- "Speeding up LLVM by 10%"
- "Reducing compilation times for LLVM 11"
- "10 optimisations for LLVM" (ok this one is click-baity)
Alternative titles the author would probably have self-censored to cater to audiences:
- "Guess what LLVM? hash tables faster than linear lookup" (accusatory, rude)
- "LLVM getting a Summer Body" (fatphobic)
- "Honey I Shrunk the Compilation Time", "Oh Hi CTMark" (pop-culture is generational, and frankly those jokes are horrible)
If someone critical of the US President "ironically" uses the phrase, as in "Foo is slow, so let's make Foo fast again", they inadvertently construct or reinforce a notion of "America was not great, which is why we needed to make America great again" -- something they might not agree with. Also, it's just constantly giving more exposure to someone who already has way too much of it. They would be running a political figure's propaganda for them. Many people will want to do that, but also very many people would not want to do it and should think a bit about a slogan's context before adopting it.
(I'm aware that others across the US political spectrum have used the phrase in the past. It doesn't matter, it is currently very strongly associated with one person.)
>make banking simple, intelligent, and personal again!
They said it doesn't have a controversial connotation since we are not recruiting in the US. IMO it absolutely does.
If you want a meme: "Faster than before"
Or maybe "Getting LLVM to compile as fast as it used to."
And 10 years ago I would have maybe suggested "Make LLVM fast again" but now it has become so strongly associated with a political party that I avoid it, precisely to avoid turning technical writings into political flamewars.
Or "Reclaiming my compilation time" :-)
Trump doesn't seem to be addressing those issues, but there is nothing wrong with the implication that America has lost some of its lustre.
Just like the author of a post called "Make LLVM Fast Again" may have identified a problem without actually fixing it. The first step to improvement is identifying the problem. LLVM has problems, America has problems. Seems fine to me.