
Would be nice to see the actual wording in the cable, but I suppose Reuters are not allowed to publish that; we get a cable paraphrasing a cable.


If batch sizes are sufficiently large, for example by staging updates, is this really necessary to achieve good insert performance?
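To illustrate what staging buys you, here's a minimal sketch in Python with SQLite (purely an assumed example; the table name and data are made up): rows are staged in memory and then inserted as one batch inside a single transaction, rather than one INSERT per row.

```python
import sqlite3

# Hypothetical staging example: collect rows first, then insert them
# in one batch with executemany() inside a single transaction.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")

# Stage 10,000 rows in memory before touching the database.
staged = [(i, f"event-{i}") for i in range(10_000)]

with conn:  # one transaction for the whole batch
    conn.executemany("INSERT INTO events (id, payload) VALUES (?, ?)", staged)

count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # 10000
```

The win comes from amortizing per-statement and per-transaction overhead across the whole batch; whether that alone gets you "good enough" insert performance depends on the engine in question.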


Google claims "On average, EV batteries weigh around 454 kg" ... pretty heavy stuff to carry around.


That's for the whole assembly, obviously. If you were designing for swaps, the BMS and other active components would remain in place and only smaller packs of cells would be swapped. That also addresses the equally specious "don't want to swap out a critical component that's half my car's value" argument.


If we can do it with scooters today, I bet we can do it with cars; we just need some stronger mechanical arms and whatnot.


With scooters you handle the battery yourself.

Unless you plan to scale up human strength in the coming years, it won't be the same solution.



We've seen a multitude of issues, like jobs failing to start or getting delayed too long (plus the infamous "if your cronjob fails too much it will stop working forever").

Though it seems they rebuilt the controller to address most of the issues https://kubernetes.io/blog/2021/04/09/kubernetes-release-1.2...
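For context, the "stops working forever" behavior comes from missed schedules: if a CronJob misses more than 100 start times and `startingDeadlineSeconds` is unset, the controller stops scheduling it entirely. A hedged sketch of a spec that guards against this (the name, schedule, and image are hypothetical; the fields are standard `batch/v1` CronJob options):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cronjob        # hypothetical name
spec:
  schedule: "*/5 * * * *"
  # Only misses within this window count toward "missed schedules",
  # so the controller never hits the 100-miss cutoff.
  startingDeadlineSeconds: 300
  concurrencyPolicy: Forbid    # don't pile up overlapping runs
  jobTemplate:
    spec:
      backoffLimit: 3          # retry a failing run at most 3 times
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: task
              image: busybox
              args: ["sh", "-c", "echo hello"]
```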


This 2011 addition to the XZ Utils Wikipedia page is interesting because a) it's unclear why it's relevant, and b) who is Mike Kezner? He's not mentioned on the Tukaani project page (https://tukaani.org/about.html) under "Historical acknowledgments".

https://en.wikipedia.org/w/index.php?title=XZ_Utils&diff=pre...

Arch Linux played an important role in making this compression software trusted and depended upon. Perhaps that's not a coincidence, but at the very least, such a big project should consider more carefully whether the software it distributes and relies on is worth the risk.


> Arch Linux played an important role in making this compression software trusted and depended upon.

Because of the way Arch distributes packages? Then what do you think about FreeBSD?


If you check the history of that IP address, it added Mike Kezner to other pages. No clue why.


Here's another reference on this: https://www.yoursoundmatters.com/vinyl-vs-cd-in-the-loudness....

> As explained earlier, due to the physical limitations of vinyl, there are limits as to how loud you can press a record, and because vinyl is “for audiophiles” – there is less incentive for record companies to compromise the quality of vinyl releases. As a result, many vinyl records are mastered differently to the CD release with more dynamic range and at lower volumes.

But I read somewhere that Radiohead themselves preferred the more compressed sound, perhaps owing to listening in a car rather than on a high-fidelity setup.


I've seen this trotted out and I think it's absolute bullshit. I've asked multiple people in the music business, and they've all told me they use the same masters for CDs and vinyl.


Mastering engineer here. I supply less compressed versions for vinyl and would not sign off on it if it were the same as the CD/streaming master. What labels do after the fact is another story...


That's great to hear (!) - any idea how common this is with other engineers?


I would like to think that it is quite common. Certainly for guys who provide lacquer/DMM cutting services.

There is an issue with vinyl brokers and certain unnamed plants that advertise "mastering" that consists of running any delivered audio through proprietary software that "fixes" all physically problematic issues that could affect cutting or playback (excessive sibilance, excessive negative stereo correlation etc.). There is minimal listening involved and they can cut almost any audio. All optimised for maximising the factory throughput, not sound quality.

Personally, I hope this will become less of an issue in the future as vinyl gets more popular and people become a little more educated.


I would tend to agree: the "audiophile" is a microscopically small segment of the music market, and music companies, let alone manufacturers, are NOT going to spend extra time and money producing stuff for specialty segments (hence the MoFi fiasco). Marketing is enough! Most so-called audiophiles also aren't really into DR or "dynamic sound" or anything beyond their own audio preferences, whether that's cool/expensive hardware or hanging out on Head-Fi.


> the "audiophile" is a microscopically small segment of the music market

A small portion of the market but the portion most willing to spend large amounts of money on music and music related products.


Guilty as charged, because the emotional return is exponentially high. But we're still microscopic.


I have two versions of the same album, one on CD, one on vinyl. They don't sound the same, and I prefer the vinyl version. I'm not implying it objectively sounds better; maybe it's worse at the waveform level, but it sounds better to me: it seems much more "present". Could you explain the reason for this?


Vinyl's dynamic range is far inferior to CD's, which makes it a natural compressor. Most people like the vinyl sound because it's compressed as well, albeit not as awfully as modern digital productions. Many vinyl records made in the '90s were mastered digitally before pressing, and audiophiles swear they hear the same magic sound even though what they're listening to comes from 100% digital material.

> it seems much more "present"

That could be due to low frequencies that vinyl can't reproduce, which are reduced to avoid distortion. Vinyl's poor crosstalk figures could also play a role here.


> and audiophiles swear they hear the same magic sound although what they listen to comes from 100% digital material

Not unlikely, as the signal did get converted back to analog, and the physical media's characteristics influence the mastering even when it's being done digitally.


Differences in the sound waves that reach the ear can come from the audio data being written to and retrieved from an imperfect recording medium (vinyl), as well as from differences in the frequency response of the amplifiers or speakers used after the audio is read.

"Presence" is usually associated with high frequency content. Turn up the high frequencies and the music seems more present. Therefore, differences in media/amplifier/speaker high frequency response will make the music seem more or less "present".


Stadium Arcadium, for example, is definitely different on vinyl. The CD version and the "audiophile" 24-bit version are both horribly compressed.


Metallica's Death Magnetic is the worst example I know of an album that's been utterly butchered by the loudness war. It is absolutely unlistenable - the compression makes my ears bleed.

And it's a crying shame because in terms of raw songwriting it could have been the best thing they've put out since the Black Album. What a waste of some good riffs.


Famously, Rush's Vapor Trails was also a victim, with both the band and fans unhappy with the mix. Luckily a remix was produced...

https://popdose.com/popdose-qa-david-bottrill-on-rushs-vapor...


There are multiple unofficial fan remasters based on tracks extracted from Guitar Hero, and now a Mastered for iTunes version too. Try these, they sound far better.


Yeah, it's mostly wishful thinking. The primary audience for records isn't audiophiles; it's collectors, who often don't even own a record player and just put the records up on display. Unless an artist or mastering engineer has a particular fondness for the medium, they're not going to put much effort into the "analog" master.


Now they do, but they didn't use to.


They mention https://pola.rs/ and highlight how it has the exact same properties (out-of-core, etc.), but they don't include it in the benchmarks, and there's no mention of it on their detailed comparison page.


That's because they're comparing against distributed solutions only. Polars is one of the fastest solutions for single-machine workflows, but it doesn't support distributed workflows.


Ah of course, that makes sense, thanks. In their comparison, perhaps they should still mention that just to clear up any confusion.


Oh yes good point! We'll be sure to add more details about comparisons with local dataframe libraries such as Pandas/Polars/DuckDB.


I was told they use the Polars engine, so they "just" built a distributed system around it.


Hello! Daft developer here - we don’t directly use Polars as an execution engine, but parts of the codebase (e.g. the expressions API) are heavily influenced by Polars code and hence you may see references to Polars in those sections.

We do have a dependency on the Arrow2 crate like Polars does, but that has been deprecated recently so both projects are having to deal with that right now.


I don't see any direct dependencies on Polars. But in the comments, they wrote that some parts are "taken from", "influenced by", "adapted from", and "based on" Polars.

[0] -- https://github.com/search?q=repo%3AEventual-Inc%2FDaft%20pol...


yes exactly, they """took""" part of the Polars engine code.


The filmed entertainment industry is pretty massive and AI is going to make a big impact there (for better or worse), that's probably something most folks can understand. CGI is already used everywhere and generative AI takes this trend much further.


ChatGPT knows how to answer this question, but I don't know how. Perhaps it's programmed to answer that specific question?


> A program run with Deno has no file, network, or environment access unless explicitly enabled.

You can do this using containerization technology; there's no need to reinvent it per language runtime.


Not all software is shipped using containers. For example, with Deno, you can compile your application into a single executable binary. By having permissions built into the runtime, this means you can import a third-party package but only allow network requests to go to specific URLs; this way, even if malicious code is referenced in the app, it can't phone home.
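To make that concrete, here's a rough sketch of what this looks like on the CLI (the hostname and file names are hypothetical; the flags themselves are Deno's documented permission options):

```shell
# Allow network access only to one host; all other I/O is denied.
deno run --allow-net=api.example.com main.ts

# Compile to a single binary with the same permissions baked in.
deno compile --allow-net=api.example.com --output myapp main.ts
```

Even if a compromised dependency tries to exfiltrate data, requests to any host other than the allow-listed one fail at the runtime level.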


Yes, but on non-Linux systems you then have the fairly large overhead of running containers.


On MacOS, you have built-in sandboxing via "sandbox-exec" which shouldn't incur any noticeable overhead. It's used by Chrome, Bazel, etc.

Not sure what's available on Windows.
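For the curious, a minimal sandbox-exec sketch (the profile language is Apple's SBPL; sandbox-exec is deprecated but still shipped, and this particular profile is only an illustrative assumption, not a vetted policy):

```shell
# Allow everything except outbound network access; the curl call
# should then fail to connect.
sandbox-exec -p '(version 1) (allow default) (deny network*)' \
  curl https://example.com
```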


Windows has a lot of things in this department. https://github.com/microsoft/Windows-Sandbox-Utilities sounds similar to what you describe, but there are also finer-grained APIs: https://learn.microsoft.com/en-us/windows/win32/secauthz/app...


Agreed. This feature just makes it worse as a scripting language, which is supposed to enable rapid development.


You just add a flag to the command line to give permissions. It won't harm your productivity.


Even without flags, it will ask to allow access interactively instead of silently aborting.


Why? `--allow-all` is the epitome of trivial. You can even wrap the deno executable in a script that passes that to it every time if that's what you really need.


Even better, you can do `-A`


Yes!

