According to the article, “the administration maintains they have not fired positions essential to public safety”. I guess it depends on whose viewpoint is used to determine public safety.
While I recognize these are federal lands, I wonder if the states will pick up the slack to keep the parks open. In the end, Trump wants the states to do more with less federal government involvement, so I am not surprised at the force reduction. And the layoffs seem to have been happening for a while now (since way before Trump).
> if the states will pick up the slack to keep the parks open
maybe so, but then states have to raise taxes to pay for it
the fed gov gets to say "look at all the money we saved you"!! (actually they're putting the savings toward border security, but anyway), but in the end taxpayers don't actually pay any less
that's beside the fact that these are federal lands, not state lands; and that centralized management is generally more cost-effective
Yes, I understand. But why can't states help provide funding for the people who take care of the land? The states wouldn't take over the land, just help maintain it. Volunteers, non-profits, etc. could certainly help out here.
The feds aren't just going to let the land sit unbothered. They're going to sell it off and/or allow resource extraction. Nothing a non-profit can prevent.
For those of you who use "zrepl" for ZFS replication, you may want to check out the fork from "dsh2dsh" - https://github.com/dsh2dsh/zrepl. This fork provides a bunch of updates to zrepl, including:
* Local time-zones instead of only UTC
* Faster replication jobs
* Enhanced cron job definitions for pull/push jobs
* New configuration for control and prometheus services
The original code works very well, but the new fork has added some enhancements the original author chose not to include.
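As a rough sketch, a cron-scheduled push job in the fork looks something like this (based on upstream zrepl's YAML layout; the exact cron field names are my assumption, so check the fork's docs before using):

```yaml
# zrepl.yml - sketch only; verify field names against the fork's docs
jobs:
  - name: push-to-backup
    type: push
    connect:
      type: tcp
      address: "backup.example.com:8888"  # hypothetical backup target
    filesystems:
      "tank/data<": true                  # tank/data and all children
    snapshotting:
      type: cron
      cron: "0 2 * * *"                   # daily at 02:00, local time
      prefix: zrepl_
    pruning:
      keep_sender:
        - type: last_n
          count: 24
      keep_receiver:
        - type: last_n
          count: 90
```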
For our production PGSQL databases, we use a combination of PGTuner[0] to help estimate RAM requirements and PGHero[1] to get a live view of the running DB. Furthermore, we use ZFS with the built-in compression to save disk space. Together, these three utilities help keep our DBs running very well.
We were running very large storage volumes in Azure (2TB+) and wanted to leverage ZFS compression to save money. After running some performance testing, we landed on a good balance of PGSQL and ZFS options that worked well for us.
It is - depending on the read-vs-write workload. For our workload, we landed on a ZFS recordsize of 128K, which gives us 3x-5x compression. Contrary to the 8KB/16KB suggestions on the internet, our testing indicated 128K was the best option. And using compression allows us to run much smaller storage volumes in Azure (thus saving money).
We did an exhaustive test of our use cases to find the best ZFS tuning options with Postgres (again, for our workload).
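As a sketch, the shape of that tuning looks like this (the recordsize and compression values follow from the numbers above; the remaining properties are common Postgres-on-ZFS recommendations, assumptions rather than confirmed settings):

```sh
# Sketch of a Postgres dataset tuned along the lines described above.
zfs create tank/pgdata
zfs set recordsize=128k tank/pgdata      # the 128K sweet spot from our testing
zfs set compression=lz4 tank/pgdata      # compression algorithm is an assumption
zfs set atime=off tank/pgdata            # skip access-time writes on reads
zfs set xattr=sa tank/pgdata             # store xattrs in inodes, not hidden files
zfs set logbias=throughput tank/pgdata   # commonly suggested for bulk DB writes
```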
What? Every car can automatically change lanes, automatically merge into traffic, automatically exit to an off-ramp, automatically navigate a roundabout, etc.? I think there may be a language barrier here, because most cars/trucks I see on the road cannot automatically do any of these things.
Yes to all of those. Adaptive cruise control. Lane change assist. Auto braking. Automatic exits. This stuff has been standard for many years now, and every major brand is shipping L2/L3 autonomy features. Manufacturers just don't market it as "autopilot" or "fully self driving", because that's not what it is.
Sorry - just bought a new Toyota RAV4; it does not have auto lane change, the ability to auto-merge onto the highway, the ability to auto-stop at a stoplight, etc.
It does have adaptive cruise control and the ability to stay inside the lines, but it absolutely will not automatically change lanes or exit off the freeway.
It was 3 months of work and you probably saw our alphas kicking around for a while.
We had been thinking about this rewrite for like 6 months, but put it off for a while. Glad we dove into it since the finished product is so much better.
Same here. Turned in my 14" MBP M1 for a M2 15" MB-Air (1TB NVMe, 24G RAM). Lightweight, fast, excellent 15" screen. Hands down the best laptop I have ever had (and I have a stack of MBP boxes in my closet from the past 10yrs).
We are currently testing a number of systems with 12x 30TB NVMe drives on Debian 12 and ZFS 2.2.0. Each of our systems has 2x 128G EPYC CPUs, 1.5TB of RAM, and dual-port 100GbE NICs. These systems will be used to run KVM VMs plus general ZFS data storage. The goal is to add another 12x NVMe drives and create an additional storage pool.
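For anyone curious, pool creation on a box like this looks roughly like the following (a sketch only; the striped-mirror layout and device names are illustrative, not necessarily what we will run in production):

```sh
# Sketch: 12x NVMe arranged as six striped mirrors.
# Layout and device names are illustrative placeholders.
zpool create -o ashift=12 tank \
  mirror /dev/nvme0n1 /dev/nvme1n1 \
  mirror /dev/nvme2n1 /dev/nvme3n1 \
  mirror /dev/nvme4n1 /dev/nvme5n1 \
  mirror /dev/nvme6n1 /dev/nvme7n1 \
  mirror /dev/nvme8n1 /dev/nvme9n1 \
  mirror /dev/nvme10n1 /dev/nvme11n1
```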
I have spent an enormous amount of time over the past couple of weeks tuning ZFS to give us the best balance of reads vs. writes, but the biggest problem is trying to find the right benchmark tool to properly reflect real-world usage. We are currently using fio, but the sheer number of options (queue depth, numjobs, libaio vs io_uring) makes the tool unreliable.
For example, comparing libaio vs io_uring with the same options (numjobs, etc.) makes a HUGE difference. In some cases, io_uring gives us double (or more) the performance of libaio; however, io_uring can also produce numbers that don't make any sense (e.g., 105GB/sec reads on a system that maxes out at 72GB/sec). That said, we were able to push > 70GB/sec of large-block (1M) reads from 12x NVMe drives, which seems to validate that ZFS can perform well on these servers.
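For reference, our comparisons are roughly of this shape (a sketch; the mount point, size, and runtime are placeholders rather than our exact test matrix):

```sh
# Sketch: identical large-block sequential read test, once per engine.
# Reads may be served from the ARC - see the primarycache note below.
for engine in libaio io_uring; do
  fio --name=seqread-$engine \
      --directory=/tank/bench \
      --rw=read --bs=1M --size=16G \
      --ioengine=$engine --iodepth=32 --numjobs=8 \
      --time_based --runtime=60 \
      --group_reporting
done
```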
OpenZFS has come a long way from the 0.8 days, and the new O_DIRECT option coming out soon should give us even better performance for the flash arrays.
If you are seeing unreasonably fast read throughput, it is likely that reads are being served from the ARC. If your workload will benefit from the ARC, you may be seeing valid numbers. If your workload will not benefit from the ARC, set primarycache=metadata on the dataset and rerun your test, potentially with a pool export/import or reboot to be sure the cache is cleared.
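Concretely, something like this (dataset and pool names are placeholders):

```sh
# Serve only metadata from the ARC so data reads actually hit the drives.
zfs set primarycache=metadata tank/bench

# Export/import (or reboot) to drop anything already cached.
zpool export tank
zpool import tank
```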
The fact that fio has a bunch of options doesn’t make the tool unreliable. Not understanding the tool or what you are testing makes you unreliable as a tester. The tool is reliable. As you learn you will become a more reliable tester with it.
After seeing some of the unrealistic numbers, I set primarycache=metadata just like you pointed out. And, you are correct, I need to learn to be a better tester...
I design similar NVMe-based ZFS solutions for specialized media+entertainment and biosciences workloads and have put a massive amount of time into the platform and its tuning needs.
Also think about who will be consuming the data. I've used an RDMA-enabled SMB stack and client tuning to help get the best I/O characteristics out of the systems.
It depends on the use case. For high-speed microscopes, I may get a request that says, "we need to support 4.2 Gigabytes/second of continuous ingest for an 18-hour imaging run." In those situations, it's best to test with realistic data.
For general video and media workloads, it may be something like, "we have to accommodate 40 editors working over 10GbE (2 x 100GbE at the server) and minimize contention while ingesting from these other sources".
I work with iozone to establish a baseline. I also have a "frametest" utility that helps mimic some of the video characteristics.
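A baseline iozone run looks something like this (a sketch; the record size, file size, and target path are placeholders to be matched to the workload):

```sh
# Sketch: sequential write/read baseline at a 1M record size.
# -i 0 = write/rewrite, -i 1 = read/reread
iozone -i 0 -i 1 -r 1m -s 16g -f /mnt/pool/iozone.tmp
```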