The original argument did not involve which trips can be replaced by walking or cycling, merely that cars are safer per distance driven. By that metric, driving a car comes out much safer than walking, which should give us pause and make us reconsider whether the metric makes sense. It doesn't -- merely having and regularly using a car makes people travel greater distances. Without a car they wouldn't, and increased car dependence results in worse traffic safety. If the US had the same per capita death rate as the Netherlands, there would be around 12k deaths per year. Instead there are over 40k.
The additional kicker is that most of the pedestrians and cyclists who are killed die in collisions with cars. As it is, well over a million people worldwide die in traffic every year, almost all of it involving cars. So much for safety.
> Most of the light pollution and noise in cities comes from cars and car infrastructure.
While I love cities, I don't think that's really the story. Cities would light up their streets with or without cars. Can you imagine New York, somehow sans cars, with pitch-black streets at night? The crime fear is way overdone - cities are very safe these days - but I think I'd still prefer streetlights.
> I guess the relation I'm failing to see in this argument is why skew or no skew is supposed to be a determining factor in whether or not the metric is to be applicable outside of Meta in the first place.
And I guess what I'm saying is that it ain't a determining factor in and of itself; rather, it's a tool for discerning whether the metric is indeed applicable outside (or, for that matter, inside) Meta. If a metric does skew toward a specific form of "athleticism" or "productivity" at the expense of others, then it's worth asking whether there's a particular reason for using it vs. it being arbitrarily selected.
That's not a great take in practice. The main example in the article is Rust, and while they concentrate on memory safety, modern languages that provide it also provide other great features. If we stick with Rust, we also get fewer concurrency issues (enforced by the language), fewer logic issues (algebraic data types help), fewer encoding issues (richer APIs help), etc.
Try to find a specific vulnerability class that actually has a chance of becoming more likely if you move from C to Rust.
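To make the "algebraic data types help with logic issues" part concrete, here's a tiny sketch of my own (not from the article): model the states explicitly as an enum, and the compiler forces every `match` to handle all of them, so a whole class of "forgot that case" bugs can't even compile.

```rust
// Toy example: an explicit state type instead of ad-hoc flags/ints.
enum ConnState {
    Connecting,
    Established { peer: String },
    Closed,
}

fn describe(state: &ConnState) -> String {
    // `match` must be exhaustive: if a new variant is added to ConnState
    // later, this function stops compiling until the new case is handled.
    match state {
        ConnState::Connecting => "still connecting".to_string(),
        ConnState::Established { peer } => format!("talking to {peer}"),
        ConnState::Closed => "closed".to_string(),
    }
}

fn main() {
    let s = ConnState::Established { peer: "example.org".into() };
    println!("{}", describe(&s));
}
```

The same idea is what makes `Option`/`Result` so much less error-prone than null pointers and errno-style return codes.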
> I don't think adding a nice controller and cool games to play would be prohibitively expensive.
Yeah, but it would be more expensive than dirt cheap.
I see it all the time; you wouldn't think a huge company would have a problem spending money on something. But, at the end of the day, it's all about that department's budgets and priorities.
If you get banned from eBay, your SSN/phone number/email/browser fingerprint/address/etc. are used to keep you from coming back. What system would enforce that for computation nodes?
The most interesting aspect of this research is that Google published it. They have a huge competitor in this area, so it would make sense to stop publishing - unless they think that OpenAI already does this.
My team at work hasn't created any meaningful value in a year. We've shipped nonstop, but the things we're tasked to work on just don't work out - or, even when they're well received, they simply don't add up to our salaries.
Families are already struggling when only the father is working.
Work is hard for the father, and raising kids is even harder for the mother.
Now both father and mother must have a job - and that's just to barely survive.
And the government expects us to have kids?
Fucking idiotic pricks.
Unless the government starts doing something to ensure that all of that shiny GDP actually trickles down, instead of continuously pooling up at the top, the situation will only get worse.
I don't know if they've fixed this, but it used to be that the pointer target was inside the black border, not at the actual tip of the pointer. That's not really a problem if the pointer is at the default tiny size, but when you make it bigger suddenly everything you point at is offset by the thickness of the border, which makes it borderline unusable.
Maybe it's time for them to realize there is a point where you have all of the customers, and it's ok to be boring and just pay a dividend instead of constant growth-growth-growth-growth.
HPC isn't exactly special, and to some extent neither is Spack. Spack's gotten some attention in talks at, e.g., CppCon.
The big differentiator is really the degree to which people want to tune and customize their builds in HPC vs. other communities, and the diversity of hardware that needs to be supported. Things that stand out to me:
- different applications' needs to customize the builds of their dependencies
  - e.g., one app might need HDF5 built with MPI support while another wants it built without. Those are two different, incompatible HDF5 builds (see the spec sketch after this list).
- need for specific microarchitecture builds to take advantage of vectorization
- need for GPU support (for NVIDIA, AMD, *and* Intel GPUs)
- need to use tuned, sometimes vendor-specific system libraries like MPI, cray-libsci, mol
- need for specific *versions* of dependencies (solvers, mesh libs, etc.) for
numerical reproducibility
- need to integrate across languages, e.g. C, C++, Fortran, Python, Lua, perl, R,
and dare I even say Yorick.
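To make the combinatorial flavor concrete, here's roughly what some of those needs look like as Spack specs on the command line. The package names, versions, and targets below are just illustrative (and `my-app` is a stand-in for your own application's package), not a recipe:

```
# Two different, incompatible HDF5 builds, installed side by side:
spack install hdf5 +mpi ^openmpi    # HDF5 with MPI, built against OpenMPI
spack install hdf5 ~mpi             # HDF5 without MPI

# Build for a specific microarchitecture to get the vectorized code paths:
spack install hdf5 +mpi target=skylake_avx512

# Pin dependency versions, e.g. for numerical reproducibility
# (`my-app` is a hypothetical package here):
spack install my-app ^petsc@3.20.1 ^hdf5@1.14.3

# GPU support is typically expressed as variants too,
# e.g. +cuda cuda_arch=80 on packages that provide them.
```

Each of those specs concretizes to its own install, so the "incompatible" builds coexist instead of fighting over one system prefix.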
Most of these requirements are not so dissimilar from people's dev environments in, say, the AI community, where people really want their special version of PyTorch. Whereas monorepos are common in industry, they really haven't taken off in the distributed, worldwide scientific community, so you get things like Spack that let you keep rebuilding the world -- and all the microcosms in it.
So I'd say Spack is not so much a special package manager as a much more general one. You can use it for combinatorial deployments at HPC centers, for dev workflows for people dealing with multi-physics and other complex codes, and as a sort of distributed poly-repo with lock files.
The intent was never to be specific to HPC, and I would love to see broader adoption outside this community.