Hi, I'm the author of the article! Thank you for the awesome description of the various vm.dirty_* sysctls.
The problem described in my post was not _directly_ related to the kernel flushing dirty pages to disk. As such, I'm not sure that tweaking these sysctls would have made any difference.
Instead, we were seeing the kernel using too much CPU when it moved inodes from one cgroup to another. This is part of the kernel's writeback cgroup accounting logic. I believe this is a related but slightly different form of writeback problem :)
Hey, I agree that tweaking these probably wouldn't have made much difference, but tuning/reducing dirty_bytes could calm the writeback stampede and smooth that bump, potentially getting rid of whatever race was happening. Regardless, disabling the cgroup accounting there is the right thing to do, especially since you don't need it. Tbh, the main reason I wrote most of that was as background to explain the cgroup v1 vs v2 differences and why they exist (and because I was stuck in traffic for like 45 minutes :/)
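For anyone following along who wants to see where their box currently sits before tuning anything, here's a minimal sketch that reads the vm.dirty_* writeback knobs from their standard procfs locations (Linux-only paths; the `procfs` parameter is just there so it's easy to test):

```python
from pathlib import Path

def read_dirty_sysctls(procfs="/proc/sys/vm"):
    """Read the vm.dirty_* writeback knobs from procfs (Linux).

    Note: dirty_bytes/dirty_ratio are mutually exclusive pairs --
    setting one zeroes the other, so a value of 0 just means "the
    ratio form is in effect" (and vice versa).
    """
    names = [
        "dirty_bytes", "dirty_background_bytes",
        "dirty_ratio", "dirty_background_ratio",
        "dirty_expire_centisecs", "dirty_writeback_centisecs",
    ]
    return {
        name: int((Path(procfs) / name).read_text())
        for name in names
        if (Path(procfs) / name).exists()
    }
```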
If you're ever in the mood to revisit that problem, you should try disabling that discard flag and see if it makes a difference. Also, if it were me, I'd have tried setting LimitNOFILE to whatever it is in my shell and seeing whether the rsync still behaved differently.
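To find the value you'd want LimitNOFILE to match, you can read the fd limit your shell actually has (this is what `ulimit -n` reports; LimitNOFILE= is the systemd knob that sets the same rlimit for a service):

```python
import resource

# RLIMIT_NOFILE is the per-process open file descriptor limit.
# The soft limit is what `ulimit -n` prints in an interactive shell;
# a systemd service gets whatever LimitNOFILE= says instead, which
# is often very different -- hence the "behaves differently" theory.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")
```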
Anyway - thoroughly enjoyed your article. You should write more :)
Yes, Con Ed now has smart meters that report electricity usage in near-realtime, at 15-minute intervals. If you use Home Assistant, you can perhaps make use of the Opower integration to get this data: https://www.home-assistant.io/integrations/opower/
The realtime API has not yet been implemented in the Opower integration, though. That said, I don't think it would be too hard to implement. See: https://github.com/tronikos/opower/issues/24
This realtime data is also available and graphed on your account page on the Con Ed website and mobile app.
I wrote my own code that uses Con Ed's realtime API and writes the data to Prometheus so that I can view it in Grafana. My code was heavily influenced by Home Assistant's Opower integration code. Here's my code: https://github.com/dasl-/pitools/blob/main/sensors/measure_e...
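For a feel of what the Prometheus side of a pipeline like that looks like, here's a minimal sketch that renders readings in the Prometheus text exposition format (the metric name and labels are hypothetical, not what my linked code actually uses):

```python
def to_prom_exposition(metric, samples, help_text=""):
    """Render (labels, value) samples in Prometheus text exposition
    format -- the plain-text body a tiny exporter serves on /metrics.
    """
    lines = [f"# HELP {metric} {help_text}", f"# TYPE {metric} gauge"]
    for labels, value in samples:
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        lines.append(f"{metric}{{{label_str}}} {value}")
    return "\n".join(lines) + "\n"
```

Grafana then just queries Prometheus for the metric name and graphs it.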
I submitted this message; feel free to copy the same text and submit it yourself as well:
-----------------------------
I recently became aware that the Living Computers Museum, which was created by Paul Allen (Microsoft co-founder), is shutting down. As someone in the technology industry, I find that very sad! The museum was really magical. I'm wondering if the Gates Foundation can step up and save the museum from closing?
It fully depends on the model and how much conversational context you provide, but if you keep things to a bare minimum, it's under ~5 seconds from message received to starting the response using Llama 3 8B. I'm also using a vision language model, https://moondream.ai/, but that takes around 45 seconds, so the next idea is to take a more basic image captioning model, insert its output into the context, and try to cut that time down even more.
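The caption-into-context idea is basically this: run the cheap captioner once, then splice its text output into the chat messages so the LLM never has to look at pixels. A sketch, using the common OpenAI-style chat schema (the exact wording and roles here are made up):

```python
def build_context(user_message, image_caption=None):
    """Build a chat message list, injecting a cheap image caption as
    plain text instead of running a slow VLM on every message."""
    system = "You are a helpful voice assistant."
    if image_caption:
        # The captioner's output rides along in the system prompt,
        # so any text-only model can "see" the image for free.
        system += f" The user's camera currently sees: {image_caption}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_message},
    ]
```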
I also tried using Vulkan, which is supposedly faster, but the times were a bit slower than plain CPU inference with llama.cpp.
It works for me on Linux; not sure about other OSes. Although I'm now noticing that the article linked in the original post says that Ruby has a pure-Ruby replacement for readline: Reline. So I wonder whether it will stop working with more recent versions of Ruby that use Reline?
> One of the downsides of the Linux cp command, is copying a file larger than RAM will destroy every existing entry in the file cache. Under normal circumstances this is a good thing, since a least recently used strategy usually works. However it can be problematic if you're just organizing your files on a production system where you don't want to disrupt performance. As far as I know, no standard command line utility offers a way to exploit this functionality.
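While no standard utility exposes this, you can roll it yourself with posix_fadvise(POSIX_FADV_DONTNEED), which tells the kernel it may evict a file's pages from the page cache. A hedged sketch of a cache-friendly copy (Linux-specific advice calls; not what cp does):

```python
import os

def copy_without_caching(src, dst, chunk=1 << 20):
    """Copy src to dst, then advise the kernel to drop both files'
    pages from the page cache so the copy doesn't evict hot data."""
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            buf = fin.read(chunk)
            if not buf:
                break
            fout.write(buf)
        fout.flush()
        # Dirty pages must reach disk before DONTNEED can evict them.
        os.fsync(fout.fileno())
        if hasattr(os, "posix_fadvise"):  # not available on all platforms
            os.posix_fadvise(fin.fileno(), 0, 0, os.POSIX_FADV_DONTNEED)
            os.posix_fadvise(fout.fileno(), 0, 0, os.POSIX_FADV_DONTNEED)
```

Note DONTNEED is only advice: the kernel is free to ignore it, and pages another process still has cached may stick around.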
YouTube has recently been implementing download speed throttling on some video downloads. See: https://github.com/ytdl-org/youtube-dl/issues/29326 . youtube-dl does not yet have a solution for this occasional download speed throttling.