I'm glad to see someone else describe their experience this way too.
Bazel has arrived at $WORK, and it has been a non-trivial lift even for its passionate advocates. I know it was written by the Very Smart People at Google. They are clearly smarter than me, so I must be the dummy. Especially since I never passed their interview tests. :-)
Of course, this being Google, by the time I'm fully on board the train, the cool kids will be building a new train, and then I'll have to hop onto that one to enjoy the rewards of the promised land that never quite seem to arrive.
> know it was written by the Very Smart People at google
For Google. That's the key. I have the privilege of having experienced both sides, having been at Google for nine years. I never had a problem with Blaze, but using Bazel in a smaller company has been extremely painful. I think there are just very few places that have the exact same problems as Google, for which something like Bazel would be a great fit.
That's the rub. It provides scalability for very large organizations, of which there are few. It's similar to running OpenStack. Meta also has some projects like this, such as Buck2, whose open-source release lacks the really good virtual-filesystem acceleration layer (EdenFS). Megacorp FOSS tends to skew toward whizbang features that are incomplete, complicated, poorly documented, and require a lot of extra work.
Actually, if you could make something like GitHub, where all software was part of a single megarepo and built constantly, that would be incredibly useful, and Bazel would be excellent for it (or at least the closest thing we have to reasonable).
The problem with Bazel and almost every other build system (all except the ones that scan the source files and build a dependency graph) is that you'll be writing build instructions for all of your dependencies that aren't already using it. If that were done for you, these tools would be incredible.
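To make that concrete, here's a minimal sketch of the kind of BUILD file you end up writing by hand for a vendored third-party C library that ships with no Bazel support. The package path and library name are hypothetical; `cc_library` and `glob` are standard Bazel constructs:

```starlark
# third_party/somelib/BUILD.bazel -- hypothetical hand-written rules for a
# dependency that knows nothing about Bazel. You are re-describing its
# sources, headers, and compiler flags yourself, and you must keep this
# in sync whenever upstream reorganizes its files.
cc_library(
    name = "somelib",
    srcs = glob(["src/*.c"]),
    hdrs = glob(["include/*.h"]),
    includes = ["include"],
    copts = ["-O2"],
    visibility = ["//visibility:public"],
)
```

Multiply this by every transitive dependency in your tree and the "non-trivial amount of work" above starts to add up.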
Compiling things and wanting a robust build cache so developers spend less time waiting isn't a problem remotely unique to Google. You might not have Google-scale resources to hire a team of developers to optimize it to the nth degree like they can, but holy shit, we are not gonna use Makefiles for advanced build systems anymore.
> Compiling things and wanting a robust build cache so developers spend less time waiting isn't a problem remotely unique to Google.
That wasn't my argument at all. Plenty of modern tools address this exact need; it isn't unique to Bazel. If you read the article, the author made many interesting remarks on how Bazel reflects the unique design choices of Blaze, which were often picked due to Google's needs.
My point is that when people hit these barriers, they need to understand that it's not because they are unintelligent or incapable of understanding a complex system. That's what the OP I responded to was saying, and I was just providing some advice.
While Jefferson is an understandably venerated figure in the founding of America, he was not active in the creation of the Constitution. He did, of course, write the Declaration of Independence, and despite misgivings about the strength of the executive branch, he went on to become a hugely influential president.
On the flip side, it took him a while to come around and see the folly of the French Revolution, whereas his political critics, notably Adams, Washington, and the frequently maligned Hamilton, were quick to keep their distance from it.
I like Jefferson but he sometimes seems to have an overly rosy and romantic view of revolutions. Tearing down is easy. Building up is very hard.
I think that's only generally true for the period of time where the new tool has yet to achieve full functional parity with what it replaced. As that functionality gap is closed, the performance increase usually declines too.
$ curl -LO 'https://burntsushi.net/stuff/subtitles2016-sample.en.gz'
$ gzip -d subtitles2016-sample.en.gz
$ time rg -c 'Sherlock Holmes' subtitles2016-sample.en
629
real 0.099
user 0.063
sys 0.035
maxmem 923 MB
faults 0
$ time LC_ALL=C grep -c 'Sherlock Holmes' subtitles2016-sample.en
629
real 0.368
user 0.285
sys 0.082
maxmem 25 MB
faults 0
$ time rg -c '^\w{42}$' subtitles2016-sample.en
1
real 1.195
user 1.162
sys 0.031
maxmem 928 MB
faults 0
$ time LC_ALL=en_US.UTF-8 grep -c -E '^\w{42}$' subtitles2016-sample.en
1
real 21.261
user 21.151
sys 0.088
maxmem 25 MB
faults 0
(Yes, ripgrep is matching a Unicode-aware `\w` above, which is why I turned on GNU grep's locale feature, to make it apples-to-apples.)
Now to be fair, you did say "usually." But actually, sometimes, even when functional parity[1] has been achieved (and then some), perf can still be wildly improved.
[1]: ripgrep is not compatible with GNU grep, but there shouldn't be much you can do with grep that you can't do with ripgrep. The main gap is stuff related to the marriage of locales and regexes; e.g., ripgrep can't do `echo 'pokémon' | LC_ALL=en_US.UTF-8 grep 'pok[[=e=]]mon'`. Conversely, there's oodles that ripgrep can do that GNU grep can't. For example, transparently searching UTF-16, a wildly popular use case (e.g., on Windows) that POSIX forbids grep from supporting.
I've been using ripgrep for years now and I'm still blown away by its performance.
It searches a couple of gigabytes in the blink of an eye.
I just checked and did a full search across 100 gigabytes of files in only 21 seconds.
The software is fantastic, and moreover it goes to show what our modern hardware is capable of. In these days of unbelievable software waste and bloat, stuff like ripgrep, dua, and fd reminds me there is hope for a better world.
> I don't think rg speed can be attributed to Rust.
I didn't say it was, and this isn't even remotely close to my point. The comment I was replying to wasn't even talking about Rust versus C. Just new tools versus old tools.
> ripgrep gains comes from a ton of brilliant optimizations and strategies done by it's author. They wrote articles about such tricks.
Not necessarily. Sometimes a better architecture, or paying down long-standing technical debt, yields large permanent gains. Compare yarn vs. npm, or even quicksort vs. bubble sort.
I think the key difference between these two views of rights could be summed up like this: one side sees rights as a moral sanction to act in a social context; the other side sees a right as a moral claim to be fulfilled by someone else.
Well, I think that if you have a moral claim over someone else, their failure to fulfill that claim is going to be an issue that has to get resolved with or without their consent.
My attempts to describe what I think is the logical consequence of that view of rights should not be taken as an endorsement of it.
This is exactly where I think the danger of bad philosophy leads: people confusedly believing they have a claim to control others merely for existing. The right to pursue life does not require an automatic claim to control the lives of others. The "right" to a certain quality of life, however, does imply a belief that it must be provided by some means.
Every claim to property over any parcel of land is fundamentally a coercive taking in that it tries to forbid everyone else, billions of people, from using that piece of land. Any attempt to enforce that land claim is an initiation of aggression against other people freely moving about in the world. In that way any system of property, libertarian or not, has at its root coercion. Some coercion is table stakes for civilization and that is ok.
> Every claim to property over any parcel of land is fundamentally a coercive taking in that it tries to forbid everyone else,
Something can't be "taken" if it's not owned.
Property rights systems exist because people use property to achieve their life's values, and having billions of people argue over how to use land is not enactable (I hope that's obvious).
People's lives are not served by telling billions of people who want to use a plot of land to "fight it out", and thus governments have reasonably enacted systems that give people both physical and intellectual property based on their efforts.
This isn't to say that property systems can't be improved; our intellectual property system obviously has many ways it could be improved (and it changes as we discover new knowledge). The end goal of these political policies, though, is to create social systems that allow individuals to maximally pursue their lives.
The goal of property systems is NOT to give everyone a certain quality of life.
Take a look at even the most communist/anarchist society you can imagine (the kind with people who hate those who own property), and you will see people grasping for systems of authority that help coordinate the use of material means in order to avoid violence. Reality cannot be escaped.
Sure it can: someone can take land in the plain, everyday sense that they occupy it and tell others to stay out. But that act, and any attempt to enforce it, is coercive and aggressive, which shows that any system with property rights, including every libertarian proposal ever made, is coercive. That's OK, but it also means that your "is it voluntary?" complaints are futile and self-defeating.
> Property rights systems exist because people use property to achieve their life's values
What's your empirical evidence for that claim? The actually existing legal construct of property, in countries around the world and in international treaties, in fact serves a whole range of goals. In every prosperous country on earth there is room for both private property and taxation for public provision. In empirical studies of life satisfaction and happiness, the top is consistently dominated by democratic countries with extensive welfare states funded by taxes:
https://happiness-report.s3.amazonaws.com/2024/WHR+24.pdf#pa...
I think the intent of highlighting that sentence ('The models will be developed within Europe's robust regulatory framework') was to draw attention to the fact that the sponsors will neither move fast nor achieve anything of note. To put it more sarcastically: with sponsors like that, who needs others throwing down roadblocks!