At my last “real” job I helped implement this as part of the ops team, and later managed that team. It’s a great start, but in that case management wanted the idea of devops/SRE without actually supporting it, and it really was a shit show. If you have a bad CTO and bad leadership at the board level, no amount of re-tooling will paper over their lack of support for the real principles.
In Godot (I compile main daily) I’m running a 120 tick rate, but I’ve found some interesting optimizations happen there and above (240), and I’m seriously tempted to just make 240 my default (though I need to do more server load tests, as I’m planning to scale).
240Hz has issues too, though: it means there are on average 1.66 physics ticks per screen refresh on a 144Hz screen. In practice, an object moving at, say, 0.1 meters per tick will have moved 0.1 meters in some frames and 0.2 meters in others, which leads to choppy animation. That's why I mentioned tick rates in the kilohertz: sometimes having 10 ticks between frames and sometimes 11 isn't a big deal, but sometimes having 2 and sometimes 1 is noticeable. And if some frames have 1 tick between them and some have 0, that's a disaster, unless you do interpolation, which is a different can of worms.
Now, running physics at 240Hz or even 120Hz might still be the right choice for your game, especially when taking human factors and time constraints into account; it's just not a panacea.
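The mismatch being described can be sketched numerically: count how many fixed-rate physics ticks land between successive display frames. The rates below (240Hz physics, 144Hz display) are the ones from the comment above; the function itself is just an illustrative sketch, not any engine's actual loop.

```python
def ticks_per_frame(tick_hz, frame_hz, frames):
    """For each display frame, count how many physics ticks fall inside it."""
    tick_dt = 1.0 / tick_hz
    frame_dt = 1.0 / frame_hz
    counts = []
    ticks_done = 0
    for f in range(1, frames + 1):
        frame_time = f * frame_dt
        # run every tick whose timestamp falls within this frame
        n = 0
        while (ticks_done + 1) * tick_dt <= frame_time + 1e-12:
            ticks_done += 1
            n += 1
        counts.append(n)
    return counts

counts = ticks_per_frame(240, 144, 12)
print(counts)                      # uneven: some frames get 1 tick, some get 2
print(sum(counts) / len(counts))   # averages 240/144 = 1.66... ticks per frame
```

An object stepped a fixed distance per tick therefore visibly jumps twice as far on the 2-tick frames as on the 1-tick frames, which is exactly the choppiness described above.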
What you say is triggering.
Triggered by the darkness.
It’s okay to be afraid of the unknown.
But it’s also time to take a look within ourselves and see that we are co-creating something.
Do we want it?
Is this the best we can do as a human race?
The Irish people have a bit of luck on their side. They follow their hearts and speak up.
The ancient Celtic DNA knows. It’s a magical people, and I mean that in the sense of the Magi; enlightened men and women of knowledge who brought the sciences and the arts to humanity.
The Irish were conquered, but the power and wisdom within the DNA cannot be conquered.
It’s all within.
Like an ancient quantum computer.
It sits. Waiting to be tapped by the brave one who dares to look outside the program.
As a greybeard GNU/Linux sysadmin: rip out iptables in favor of raw nftables (newer GUI/TUI firewall interfaces support nftables), rip out NetworkManager, and use systemd-resolved to manage DNS. (On non-systemd systems like Devuan this changes.) Use systemd units for powerful program and service control, including systemd-nspawn for containerization.
iptables has been with us for more than 20 years and is only now being replaced (pretty slowly, I might add). The old rules are still supported through iptables-nft; you can just import them and forget nft exists.
Distributions I prefer have never used NetworkManager and haven't changed network configuration in a long time. RHEL and its rebuilds have used NM for what feels like an eternity. Ubuntu is the odd one out here with its constant churn, afaik.
Same with firewall wrappers like ufw and firewalld. Either your distribution uses one and you just use whatever has been chosen for you, or it doesn't and you go with nftables (or iptables-nft if you prefer).
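For anyone going the plain-nftables route, a minimal ruleset looks something like this. This is an illustrative sketch of a stateful host firewall, not anyone's production config; note also that the iptables-nft tooling ships `iptables-translate` for converting old rules to the new syntax.

```
# /etc/nftables.conf -- illustrative minimal stateful host firewall
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iif "lo" accept
        tcp dport 22 accept comment "ssh"
    }
}
```

Load it with `nft -f /etc/nftables.conf` and inspect the live ruleset with `nft list ruleset`.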
This is all only really a problem if your organization uses a bunch of distributions instead of standardizing on one, but then you probably have a lot of other, more serious problems than learning how to configure your firewall...
As a counterpoint, I evaluated FreeBSD for a project about a year ago and was really put off by its primitive service management (compared to systemd, which I know pretty well and whose features I use extensively; they really do help in daily work), and by the three firewalls, which all seem to be approximately equally supported, so you never really know where to put your time. (Unfortunately, I had to pass on the OS for other reasons unrelated to its technical merit.)
Yes, however, each has a clear set of tools, and it's clear which one you are using. There are no shims to use IPFW tooling with PF or vice versa, while on Linux they are all mixed together.
Sorry for the inconvenience; we will stop writing the software we want so that we won't risk filling BSDers' brains.
I really don't get these criticisms. You have choice, and having choices doesn't make a system bad; it just means you have to make your own choices, which can also mean going toward systems where stuff is standard.
Choice should only be offered on top of a stable foundation/base. Suppose you have a store that sells frozen food only: an incredible number of choices, but no base ingredients like flour, grains, and meat.
Software is utilitarian in nature; the goal is the task. But how do you accomplish a task with an infinite number of tools? And beyond that, how can you be sure a given tool is secure and stable?
I've had nothing but issues with systemd-resolved.
NetworkManager seems to be what things are standardizing on these days. While for some reason I've always avoided NetworkManager and used various combinations of alternatives, I'm all for having one common standard networking utility for Linux.
Same here. However, from what I've seen, touching any systemd component causes cascading issues.
I usually settle on networkmanager, since there's not a great alternative for dealing with wifi. However, it often delegates to a giant broken pile of fail.
Things can be much simpler on machines that connect via ethernet (including VMs).
NetworkManager and systemd-resolved are not really interchangeable: the latter is a local caching multi-protocol name resolver, and NetworkManager supports using it for name resolution.
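Concretely, the usual pairing is to let NetworkManager manage the links and hand name resolution off to systemd-resolved. The `dns=systemd-resolved` option is a documented NetworkManager setting, though the exact drop-in file location can vary by distribution:

```
# /etc/NetworkManager/conf.d/dns.conf
[main]
dns=systemd-resolved
```

With that in place, `resolvectl status` shows the per-link DNS servers that resolved is actually using.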