That's for the whole assembly, obviously. If you were designing for swaps, the BMS and other active components would remain in place and only smaller packs of cells would be swapped. That also addresses the equally specious "don't want to swap out a critical component that's half my car's value" argument.
We've seen a multitude of issues, like jobs failing to start or getting too delayed (and also the infamous "if your cronjob fails too much it will stop working forever" behavior).
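Assuming this refers to Kubernetes CronJobs (a guess on my part), the "stops working forever" failure is documented: if the controller counts more than 100 missed start times and `.spec.startingDeadlineSeconds` is unset, it gives up scheduling the job entirely. A minimal sketch of the mitigation:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-job   # hypothetical name for illustration
spec:
  schedule: "*/5 * * * *"
  # Without this field, >100 missed runs (e.g. after prolonged
  # controller downtime) permanently stops scheduling. With it set,
  # only misses within the last 200 seconds are counted, so the
  # job recovers on its own.
  startingDeadlineSeconds: 200
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: task
              image: busybox
              command: ["echo", "hello"]
```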
This 2011 addition to the XZ Utils Wikipedia page is interesting because (a) why is this relevant, and (b) who is Mike Kezner? He's not mentioned on the Tukaani project page (https://tukaani.org/about.html) under "Historical acknowledgments".
Arch Linux played an important role in making this compression software trusted and depended upon. Perhaps that's no coincidence; at the very least, such a big project should consider more carefully whether the software it distributes and relies on is worth the risk.
> As explained earlier, due to the physical limitations of vinyl, there are limits as to how loud you can press a record, and because vinyl is “for audiophiles” – there is less incentive for record companies to compromise the quality of vinyl releases. As a result, many vinyl records are mastered differently to the CD release with more dynamic range and at lower volumes.
But I read somewhere that Radiohead themselves preferred the more compressed sound, perhaps owing to listening in a car rather than on a high-fidelity setup.
I've seen this claim trotted out and I think it's absolute bullshit. I've asked multiple people in the music business, and they've all told me they use the same masters for CDs and vinyl.
Mastering engineer here. I supply less compressed versions for vinyl and would not sign off on a release if it were the same as the CD/streaming master. What labels do after the fact is another story...
I would like to think that it is quite common. Certainly for guys who provide lacquer/DMM cutting services.
There is an issue with vinyl brokers and certain unnamed plants that advertise "mastering" that consists of running any delivered audio through proprietary software that "fixes" all physically problematic issues that could affect cutting or playback (excessive sibilance, excessive negative stereo correlation etc.). There is minimal listening involved and they can cut almost any audio. All optimised for maximising the factory throughput, not sound quality.
Personally I hope this will become less of an issue in the future, as vinyl gets more popular and people become a little more educated.
I'm inclined to agree: the "audiophile" is a microscopically small segment of the music market, and music companies, let alone manufacturers, are NOT going to spend extra time and money on producing stuff for specialty segments (hence the MoFi fiasco). Marketing is enough! Most so-called audiophiles also are not really into DR or "dynamic sound" or anything beyond their own audio preferences, whether that's cool/expensive hardware or hanging out on head-fi.
I have two versions of the same album, one on CD, one on vinyl. They don't sound the same, and I prefer the vinyl version. I am not implying it objectively sounds better; maybe it sounds worse at the waveform level, but it sounds better to me. It seems much more "present".
Could you explain the reason for this?
Vinyl's dynamic range is far inferior to a CD's, which makes it a natural compressor. Most people like the vinyl sound because it's compressed as well, albeit not as brutally as modern digital productions.
Many vinyl records made in the 90s were mastered digitally before pressing, and audiophiles swear they hear the same magic sound even though what they listen to comes from 100% digital material.
> it seems much more "present"
That could be due to some low frequencies that vinyl can't reproduce and are reduced to avoid distortion. Also vinyl's poor crosstalk figures could play a role here.
> and audiophiles swear they hear the same magic sound although what they listen to comes from 100% digital material
Not unlikely, as the signal did get converted back to analog, and the physical media's characteristics influence the mastering even when it's being done digitally.
Differences in the sound waves that reach the ear can come from the audio data being written to and retrieved from an imperfect recording medium (vinyl), as well as differences in frequency response between the amplifiers or speakers used after the audio is read.
"Presence" is usually associated with high frequency content. Turn up the high frequencies and the music seems more present. Therefore, differences in media/amplifier/speaker high frequency response will make the music seem more or less "present".
Metallica's Death Magnetic is the worst example I know of an album that's been utterly butchered by the loudness war. It is absolutely unlistenable - the compression makes my ears bleed.
And it's a crying shame because in terms of raw songwriting it could have been the best thing they've put out since the Black Album. What a waste of some good riffs.
There are multiple unofficial fan remasters based on tracks extracted from Guitar Hero, and now there's a Mastered for iTunes version too. Try those; they sound far better.
Yeah, it's mostly wishful thinking. The primary audience for records is not audiophiles; it's collectors, who often don't even own a record player and just put the records on display. Unless an artist or mastering engineer has a particular fondness for the medium, they're not going to put much effort into the "analog" master.
They mention https://pola.rs/ and highlight how it has the exact same properties (out of core etc.), but they don't include it in the benchmarks, and there's no mention of it on their detailed comparison page.
It's because they are comparing to distributed solutions only. Polars is one of the fastest solutions for single-computer workflows, but it doesn't support distributed workflows.
Hello! Daft developer here - we don’t directly use Polars as an execution engine, but parts of the codebase (e.g. the expressions API) are heavily influenced by Polars code and hence you may see references to Polars in those sections.
We do have a dependency on the Arrow2 crate like Polars does, but that has been deprecated recently so both projects are having to deal with that right now.
I don't see any direct dependencies on Polars. But in the comments, they wrote that some parts are "taken from", "influenced by", "adapted from", and "based on" Polars.
The filmed entertainment industry is pretty massive and AI is going to make a big impact there (for better or worse), that's probably something most folks can understand. CGI is already used everywhere and generative AI takes this trend much further.
Not all software is shipped using containers. For example, with Deno, you can compile your application into a single executable binary. Because permissions are built into the runtime, you can import a third-party package but allow network requests only to specific hosts; that way, even if malicious code is referenced in the app, it can't phone home.
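A minimal sketch of what that looks like, using Deno's documented `--allow-net` allow-list flag (the file name `app.ts` and host `api.example.com` are hypothetical):

```shell
# Run with network access restricted to one host. Any fetch to a
# host not on the allow-list throws a PermissionDenied error at
# runtime instead of silently succeeding.
deno run --allow-net=api.example.com app.ts

# The same flag works with `deno compile`; the permission set is
# baked into the resulting single-file executable.
deno compile --allow-net=api.example.com -o app app.ts
```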
Why? `--allow-all` is the epitome of trivial. You can even wrap the deno executable in a script that passes that to it every time if that's what you really need.