There have been offensive uses in Greece, Japan, the US, and New Zealand, along with some use on seagoing vessels against pirates. So it's not just for notifications, and even in pure notification use there are reports of hearing damage when the volume is turned up too high.
> Also, you can probably just strap a wifi-capable relay to power pins on your motherboard.
That's a lot more work than just buying a different KVM that already exposes its spare GPIO pins for exactly that purpose. There are plenty of options that come bundled with the necessary cables and adapters to connect the KVM to the motherboard headers while leaving the existing power/reset buttons usable.
Does anyone know a battle-tested tool that would help with (almost) online migrations of PostgreSQL servers to other hosts? I know it can be done manually, but I'd like to avoid that.
PG's built-in WAL-level replication? Stream from the primary to a read replica, then promote the replica to become the new primary. You'll have a bit of downtime while you stop connections on the original server, promote the new server, and update your app config to point at it.
I believe that's a pretty standard way to provide "HA" postgres. (We use Patroni for our HA setup)
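If you script the cutover yourself, here's a minimal sketch with psycopg2 (the DSNs and the orchestration details are assumptions; pg_current_wal_lsn, pg_last_wal_replay_lsn, and pg_promote are standard PostgreSQL functions, the last one requiring PG 12+):

    import psycopg2

    PRIMARY_DSN = "host=old-primary dbname=postgres"   # illustrative
    REPLICA_DSN = "host=new-primary dbname=postgres"   # illustrative

    def replication_lag_bytes():
        # How far the replica's replay position trails the primary's WAL position.
        with psycopg2.connect(PRIMARY_DSN) as conn, conn.cursor() as cur:
            cur.execute("SELECT pg_current_wal_lsn()")
            primary_lsn = cur.fetchone()[0]
        with psycopg2.connect(REPLICA_DSN) as conn, conn.cursor() as cur:
            cur.execute(
                "SELECT pg_wal_lsn_diff(%s::pg_lsn, pg_last_wal_replay_lsn())",
                (primary_lsn,),
            )
            return cur.fetchone()[0]

    def promote_replica():
        # pg_promote() (PostgreSQL 12+) promotes a standby via SQL, no restart needed.
        with psycopg2.connect(REPLICA_DSN) as conn, conn.cursor() as cur:
            cur.execute("SELECT pg_promote(wait := true)")
            return cur.fetchone()[0]

    # Cutover: stop writes on the old primary, wait for lag to reach 0, then
    # promote and repoint the app config at the new server.
    # while replication_lag_bytes() > 0: time.sleep(1)
    # promote_replica()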
Why would you run 50k+ connections if they can't all be active at once anyway? Unless you have 50k+ cores and I/O beefy enough not to get overwhelmed by that.
You can have as many connections as you want, but you'll have to trade them for a lower work_mem, which hurts performance (work_mem is allocated per sort/hash operation per backend, so worst-case memory use scales with the connection count). Traditional advice is to keep it below 500 per PostgreSQL instance (I'd say per physical host).
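Back-of-the-envelope, with illustrative numbers:

    ram_gb = 64                # assumed RAM budget for query memory
    sort_nodes_per_query = 2   # plans often contain several sort/hash nodes

    for connections in (500, 50_000):
        work_mem_mb = ram_gb * 1024 / (connections * sort_nodes_per_query)
        print(f"{connections:>6} connections -> work_mem <= {work_mem_mb:.2f} MB")
    # 500 connections leave ~65 MB per operation; 50k leave ~0.65 MB.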
I've run dozens of microservices handling thousands of requests per second with a total connection limit of around 200, most of which was still unused - all without any server-side pooler.
> After the commit, the Connection object associated with that transaction is closed, causing its underlying DBAPI connection to be released back to the connection pool associated with the Engine to which the Session is bound.
I expect other reasonably sane libs for working with transactional databases do the same.
So, if you are doing pooling correctly, you can only run out of available connections if you want to have a lot of long-running transactions.
So, why would you want every one of your 50k frontends to keep an open transaction simultaneously?
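For concreteness, a minimal sketch of that pattern with SQLAlchemy (the DSN, pool sizes, and table are illustrative): the session only holds a pooled DBAPI connection while a transaction is open, and commit() hands it back.

    from sqlalchemy import create_engine, text
    from sqlalchemy.orm import sessionmaker

    # A small pool is enough even for many app-level "users": each worker
    # only holds a connection while a transaction is actually open.
    engine = create_engine(
        "postgresql+psycopg2://app@db/appdb",  # illustrative DSN
        pool_size=10,
        max_overflow=5,
    )
    Session = sessionmaker(bind=engine)

    def handle_request():
        session = Session()           # no connection checked out yet (lazy)
        try:
            session.execute(text("UPDATE counters SET n = n + 1"))
            session.commit()          # connection goes back to the pool here
        finally:
            session.close()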
Because there's overhead in making a connection, authenticating, setting the default parameters on the connection, etc. I've never seen a framework that closed DB connections between requests.
Of course, the better design is to write a nonblocking worker that can run async requests on a single connection, and not need a giant pool of blocking workers, but that is a major architecture plan that can't be added late in a project that started as blocking worker pools. MySQL has always fit well with those large blocking worker pools. Postgres less so.
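A sketch of that nonblocking design with asyncio and asyncpg (the DSN and table are illustrative; asyncpg allows only one query in flight per connection, hence the lock): many concurrent handlers, one persistent connection, no pool of blocking workers.

    import asyncio
    import asyncpg

    async def main():
        conn = await asyncpg.connect("postgresql://app@db/appdb")  # illustrative DSN

        # asyncpg permits one in-flight query per connection, so serialize access;
        # handlers still interleave freely on the event loop while they wait.
        db_lock = asyncio.Lock()

        async def handle_request(user_id: int) -> str:
            async with db_lock:
                return await conn.fetchval(
                    "SELECT name FROM users WHERE id = $1", user_id  # illustrative table
                )

        # 100 "requests" in flight at once, all sharing the single connection.
        names = await asyncio.gather(*(handle_request(i) for i in range(100)))
        print(len(names))
        await conn.close()

    asyncio.run(main())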
As I said, you can return the connection to the connection pool.
From the perspective of keeping the number of open connections low, it doesn't really matter whether you close the connection or return it to the pool, because in either case it becomes available to other clients.
I might not be understanding what you're pointing out here. It sounds to me like SQLAlchemy is talking about a pool of connections within one process, in which case releasing back to that pool does not close that process's connection to the database. The parent comment is talking about one connection per process with 50k processes. My point was that you don't need that many processes if each process can handle hundreds of web requests asynchronously.
If you are saying that a connection pool can be shared between processes without pgbouncer, that is news to me.
The most common design for a Web app on Linux in the last 20 years is to have a pool of worker processes, each single-threaded and ready to serve one request. The processes might be Apache ready to invoke PHP, or mod_perl, or a pool of Ruby on Rails or Perl or Python processes receiving the requests directly. Java tends to use threads instead of processes. I've personally never needed to go past about 100 workers, but I've talked to people who scale up to thousands, and they happen to be using MySQL. I've never used pgbouncer, but I understand that's the tool to reach for rather than configuring Pg to allow thousands of connections.
I don't think this is correct. The difference between well-optimized and unoptimized code on the CPU is frequently at least an order of magnitude in performance.
The reason it doesn't seem that way is that the CPU is so fast we often bottleneck on I/O first. However, for compute-bound workloads like inference, it really does matter.
While this is true, the most effective optimizations you don't do yourself. The compiler or runtime does it. They get the low-hanging fruit. You can further optimize yourself, but unless your design is fundamentally bad, you're gonna be micro-optimizing.
There's a HUGE performance gap between gcc -O0 and -O2. We don't really have anything to auto-magically do this for models yet. Compilers are intimately familiar with x86.
While the compiler is decent at producing code that is good in terms of saturating the instruction pipeline, there are many things the compiler simply can't help you with.
Having cache friendly memory access patterns is perhaps the biggest one. Though automatic vectorization is also still not quite there, so in cases where there's a severe bottleneck, doing that manually may still considerably improve performance, if the workload is vectorizable.
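Both effects are easy to see even from Python (a sketch; the exact ratios depend on the machine): NumPy's vectorized sum versus an interpreted loop shows the vectorization gap, and a strided read that touches a new cache line per element versus a contiguous one shows the access-pattern cost.

    import timeit
    import numpy as np

    a = np.random.rand(16_000_000)

    # Vectorization: one tight C loop over the data vs. the Python interpreter.
    t_py = timeit.timeit(lambda: sum(a[:100_000]), number=10)
    t_np = timeit.timeit(lambda: a[:100_000].sum(), number=10)

    # Access pattern: both sums read 1M elements, but the strided view
    # pulls in a fresh cache line for nearly every element.
    t_contig  = timeit.timeit(lambda: a[:1_000_000].sum(), number=100)
    t_strided = timeit.timeit(lambda: a[::16].sum(), number=100)

    print(f"interpreted vs vectorized: {t_py / t_np:.0f}x slower")
    print(f"strided vs contiguous:     {t_strided / t_contig:.1f}x slower")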
I was bullied by three separate bullies during my school years; every time, it ended when I reached some deep rage in me and hit back.
One time I threw the bully down the stairs, another time I hit one with a chair, and the last time was when the bully stole my phone and started going through the photos my girlfriend had sent me, commenting on them inappropriately - I smashed his face so hard that his glasses flew like 4 meters away. I was short and overweight; they never expected the amount of power I could generate in rage mode.
So, from my limited, anecdotal experience - it works. It also shows other people that they can stand up to the bully and that the bully is not some all-powerful being that gets to define their present and future.
Porn is just the justification. It's easy to find something repugnant on whatever streaming video site and then start with the "protect the children" nonsense.
Backward countries being backward. The main flaw of modern liberal societies is that parts of them have stopped believing that liberalism is indeed progress. All hail the moral police and long live cultural relativism or whatever its currently trendy post-structural reconstruction is.