> With fewer people in the office, the cost of excess federal office space has become a concern. Last year, the Government Accountability Office concluded that 17 of the 24 largest federal agencies used on average only 25% of their office space.
And the other was:
> As of August, 98% of Education Department employees were eligible to work from home and more than half were working remotely, according to the OMB. One of the agencies with the lowest office use rates, only 9%, was the federal government’s human resources agency, the Office of Personnel Management.
Those are crazy stats. I can understand that level of remote work for a company that is built for it from the ground up. But at the Federal government level that's a stretch.
It's not going to be difficult to convince the public that the intersection of federal bureaucracy and work-from-home is going to be rife with inefficiencies.
>> I can understand that level of remote work for a company that is built for it from the ground up. But at the Federal government level that's a stretch.
The 2020 pandemic happened and practically all jobs that could go fully remote did. After the pandemic ended, many jobs kept some level of remote work as an option.
How is it "a stretch" to think that federal government desk jobs could be any different from private sector / corporate desk jobs in terms of remote work? If the desk job workers can VPN in and attend meetings over MS Teams, Zoom, etc. they can still work as they did during the pandemic.
Just as is the case with many businesses (e.g. Amazon), return to office mandates are less about efficiency and more about control over workers. If remote work is less efficient, the business production metrics should show this. What do the business production metrics show?
The same would be true for federal government desk jobs. Where is the data that supports this decision?
Neat stuff. I think everybody with an interest in NFS has toyed with this idea at some point.
> Under the hood, customers mount a Regatta file system by connecting to our fleet of caching instances over NFSv3 (soon, our custom protocol). Our instances then connect to the customer’s S3 bucket on the backend, and provide sub-millisecond cached-read and write performance. This durable cache allows us to provide a strongly consistent, efficient view of the file system to all connected file clients. We can perform challenging operations (like directory renaming) quickly and durably, while they asynchronously propagate to the S3 bucket.
How do you handle the cache server crashing before syncing to S3? Do the cache servers have local disk as well?
Ditto for how to handle intermittent S3 availability issues?
What are the fsync guarantees for file append operations and directories?
> How do you handle the cache server crashing before syncing to S3? Do the cache servers have local disk as well?
Our caching layer is highly durable, which is (in my opinion) the key to doing this kind of staging. This means that once a write is complete to Regatta, we guarantee that it will eventually complete on S3.
For this reason, server crashes and intermittent S3 availability issues are not a problem because we have the writes stored safely.
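The pattern described above — acknowledge a write only once it's durably staged, then flush to S3 asynchronously — can be sketched roughly like this. (A toy illustration, not Regatta's actual implementation; the class and method names are made up, and a real system would replicate the journal rather than rely on one local disk.)

```python
import json
import os


class DurableWriteCache:
    """Toy write-back cache: a write is acknowledged only after it is
    fsync'd to a local journal, then flushed to the backing store later.
    Illustrative only -- a real durable cache would replicate."""

    def __init__(self, journal_path, backing_store):
        self.journal_path = journal_path
        self.backing_store = backing_store  # dict standing in for S3
        self.journal = open(journal_path, "a", encoding="utf-8")

    def write(self, key, data):
        # Stage the write durably BEFORE acknowledging it to the client.
        self.journal.write(json.dumps({"key": key, "data": data}) + "\n")
        self.journal.flush()
        os.fsync(self.journal.fileno())  # survives a crash from here on

    def flush_to_backing_store(self):
        # Asynchronous in a real system; replays the journal into "S3".
        # A crash or S3 outage before this point loses nothing -- the
        # journal is simply replayed later.
        with open(self.journal_path, encoding="utf-8") as f:
            for line in f:
                entry = json.loads(line)
                self.backing_store[entry["key"]] = entry["data"]
```

After `write()` returns, a crash of the flushing process or an S3 outage loses nothing: the journal is replayed on restart, which is what makes "eventually completes on S3" a safe guarantee.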
> What are the fsync guarantees for file append operations and directories?
We have strong, read-after-write consistency for all connected file system clients -- including for operations which aren't possible to perform on S3 efficiently (such as renames, appends, etc). We asynchronously push those writes to S3, so there may be a few minutes before you can access them directly from the bucket. But, during this time, the file system interface will always reflect the up-to-date view.
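The behavior described — renames and appends applied instantly to the strongly consistent view while the bucket lags behind — can be modeled in a few lines. (A toy model only; the names and the pending-op queue are made up for illustration.)

```python
class CachedFS:
    """Toy model: the cached view is authoritative and strongly
    consistent; the bucket is a lagging S3 representation that
    catches up asynchronously. Illustration only."""

    def __init__(self):
        self.view = {}     # authoritative, read-after-write consistent
        self.bucket = {}   # lagging S3 representation
        self.pending = []  # operations not yet propagated

    def append(self, path, data):
        # Impossible to do in place on S3; trivial against the cache.
        self.view[path] = self.view.get(path, b"") + data
        self.pending.append(("put", path))

    def rename(self, src, dst):
        # O(1) here; done directly on S3 this would be copy + delete.
        self.view[dst] = self.view.pop(src)
        self.pending.append(("put", dst))
        self.pending.append(("delete", src))

    def propagate(self):
        # Asynchronous flush: replay pending ops against the bucket.
        for op, path in self.pending:
            if op == "put" and path in self.view:
                self.bucket[path] = self.view[path]
            elif op == "delete":
                self.bucket.pop(path, None)
        self.pending.clear()
```

Between a `rename()` and the next `propagate()`, file-system clients reading `view` already see the new name, while reading the bucket directly would still show the old layout — matching the "few minutes before you can access them directly from the bucket" caveat.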
So, I assume you use a journal in the cache server.
A few related questions:
* Do you use a single leader for a specific file system, or do you have a cluster solution with consensus to enable scaling/redundancy?
* How do you guarantee read-after-write consistency? Do you stream the journal to all clients and wait for them to ack before the write finishes? Or at least wait for everyone to ack the latest revisions for files, while the content is streamed out separately/requested on demand?
* If the above is true, I assume this is strictly viable for single-DC usage due to latency? Do you support different mount options for different consistency guarantees?
These are questions that are super specific to our implementation, and I'm hesitant to share them publicly because they could change at any time. I can share that we're designed to horizontally scale the performance of each file system, and our custom protocol will enable Lustre-like scale-out performance. As for single- vs. multi-DC, I think that you'd be surprised at how much latency budget there is (a cross-DC round trip in AWS can be anywhere from 200us-700us, and EBS gp3 latencies are around 1000us).
Is it fair to say this is best suited for small files that will be written infrequently?
There’s no partial write for s3 so editing a small range of a 1 GiB file would repeatedly upload the full file to the backing s3 right?
Or is the s3 representation not the same hierarchy as the presented mount point? (ie something opaque like a log structured / append only chunked list)
It's hard to define "best", and in many cases, the answers to these questions depend heavily on the workload and the caching parameters (how long do we wait before flushing to S3, etc). We are designed to provide good file system performance, even if customers are repeatedly writing small pieces of data to a 1 GiB file, so "best" in this case is a question of whether or not it's cost efficient.
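The cost question comes down to write coalescing: if many small writes to the same file land within one flush window, they can be absorbed into a single full-object upload rather than one PUT per write. A toy model of that effect (the flush policy and names are made up, not Regatta's actual parameters):

```python
class CoalescingFlusher:
    """Toy model: N small writes within one flush window cost one
    full-object upload to S3, not N uploads. Parameters illustrative."""

    def __init__(self):
        self.dirty = False   # has the cached file changed since last flush?
        self.uploads = 0     # count of full-object PUTs to the bucket

    def small_write(self):
        # Mutate the cached copy; just mark the file dirty.
        self.dirty = True

    def flush(self):
        # Runs periodically; one PUT covers all writes since last flush.
        if self.dirty:
            self.uploads += 1
            self.dirty = False
```

With this policy, 1,000 small writes flushed every 100 writes cost 10 uploads instead of 1,000 — which is why cost efficiency depends on the caching parameters ("how long do we wait before flushing") more than on file size alone.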
Without getting too much into the details of the system, our durable cache is designed for 5 9s of durability (and we're working on a version that will provide 11 9s of durability soon). You can't achieve those durability numbers on a single attached NVMe device without some kind of replication.
This part of the article really stands out (emphasis mine):
> Mr. Kennedy has singled out Froot Loops as an example of a product with too many artificial ingredients, questioning why the Canadian version has fewer than the U.S. version. But he was wrong. The ingredient list is roughly the same, although Canada’s has natural colorings made from blueberries and carrots while the U.S. product contains red dye 40, yellow 5 and blue 1 as well as Butylated hydroxytoluene, or BHT, a lab-made chemical that is used “for freshness,” according to the ingredient label.
How can anyone working at the NYT believe what they are writing? They're equating "blueberries and carrots" with "red dye 40, yellow 5 and blue 1 as well as Butylated hydroxytoluene"?
I'm not exactly an RFK fan boy, but if I read things like this in what's supposed to be a respectable newspaper, I know I'm going to think twice about the other writings of that paper.
Yeah... the paper is all over the place. RFK isn't horrible. The bear staging in Central Park is odd and everything he stands for on vaccines makes no sense, but as an American living in Germany, the food here is just cleaner and healthier. Even the Mountain Dew here has cleaner ingredients. Mountain freaking Dew.
> Why do people also focus on the fringe science of HFCS and seed oils?
I think both are easy-to-see examples of ingredients that seem out of place from what our bodies have evolved to eat for millennia.
When you read about how things like canola oil (i.e., rapeseed oil) are so high in erucic acid that they're toxic, and must be extracted with hexane to make them edible, it's reasonable to question whether we should be eating them at all. Versus something like olive oil (literally just squeezed olives) or butter (just milk from a cow).
> both are easy to see examples of ingredients that seem out of place from what our bodies have evolved to eat for millennia
You might as well add everything farmed, raised or processed to that list.
Appeals to nature don't work in nutrition [1]. We've been starving and malnourished since the Neolithic [2]. And almost everything in our food supplies we consider "natural" is engineered.
As a follow-up on the descendant study from China, your link shows that children with a mother who experienced childhood famine have an average BMI that is higher by ~1. Children whose fathers experienced childhood famine actually have a lower BMI. This study doesn't control for the obvious cultural and behavioral implications of a parent living through the famines, although epigenetic connections have been proposed elsewhere [0].
Here [1] is another study from China that shows that simply having an obese parent is the single biggest factor in developing childhood obesity, and nearly doubles the obesity risk.
I didn't bother digging up a study, but this link [2] says that in the USA, a child with one obese parent has a 50% chance of being obese (absolute, not an increased hazard ratio!). If both parents are obese, a child has an 80% chance of being obese!
This makes a lot of intuitive sense to me. I would expect the modeled behavior, status, household food, and parenting to have a huge impact on children. This naturally overshadows the interesting but tiny impact of epigenetic transmission.
I was expecting you to link the transgenerational study from Polish famines during World War II, so thanks for some references I haven't seen before. That said, you have to admit that there are a hell of a lot of variables at play when you're talking about populations that self-reported food insecurity or underwent literal famine.
That said, if you are worried about the next generation, it'd be interesting to compare the magnitude of the transgenerational effect from simply being raised in a household with obese parents in the USA. I imagine it's much larger, but I'm not at a PC to look it up right now.
Last, if I were to lazily steelman my appeal to nature, I doubt our Neolithic ancestors yo-yo'd back and forth between skinny and obese generations due to calorie restriction.
Does the hexane being used have anything to do with erucic acid? We have bred the plant to be low in erucic acid already. Hexane is used in the processing as a way to maximize extraction. Maybe I am wrong.
Regardless that is a tail risk when thinking about the general population.
I'm not aware of him suggesting people harass anybody. There's a wide line between saying crazy things and calling people to take specific action against specific people.
My dude. He was ranting for years to an audience of people self-selecting as susceptible to propaganda about how a specific group of normal ass people was assisting the Government in dismantling their second amendment rights.
Like no he didn't literally say "go torment them" but come the fuck on. The connection between the events here isn't 1/10th as complicated as most of Alex's actual theories, it's literally just a line.
Brine pools typically have an electrolysis system that manufactures chlorine from the salt. As such, it's a salty pool with low chlorine levels, which is still more pleasant than a high-chlorine pool.
I understand what the salt is for. I don't understand why you have to add more. If the water evaporates, the salt remains. And the split sodium and chlorine also recombine. So how does it disappear?
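For what it's worth, the chlorinator chemistry suggests the chloride shouldn't be consumed at all. A rough sketch of the cycle (assuming a standard salt-cell chlorinator):

```latex
% At the cell, electrolysis generates chlorine from dissolved chloride:
2\,\mathrm{Cl^-} \rightarrow \mathrm{Cl_2} + 2\,e^-
% The chlorine hydrolyzes into hypochlorous acid, the actual sanitizer:
\mathrm{Cl_2} + \mathrm{H_2O} \rightarrow \mathrm{HOCl} + \mathrm{H^+} + \mathrm{Cl^-}
% When HOCl oxidizes contaminants, the chlorine is reduced back to
% chloride, closing the loop -- so in principle salt losses come from
% water physically leaving the pool (splash-out, backwash, filter
% rinsing), not from the chemistry itself.
```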