Hacker News | CAP_NET_ADMIN's comments

If you actually read the page that you've linked, you'll see that many European countries were just using it to deliver COVID notifications


Along with offensive uses in Greece, Japan, the US, and New Zealand, plus some uses on seagoing vessels against pirates. It's not just notifications, and even as a notification system there are reports of hearing damage when the volume was turned up too high.


Depends on the board; I've had many mobos that power on when they receive a WoL packet.

Check your BIOS settings.

Also, you can probably just strap a wifi-capable relay to power pins on your motherboard.
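For reference, waking such a box over the network is just a matter of broadcasting a WoL "magic packet" - 6 bytes of 0xFF followed by the target MAC repeated 16 times. A minimal Python sketch (the MAC address below is a placeholder):

    import socket

    def wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
        # Magic packet: 6 bytes of 0xFF followed by the target MAC repeated 16 times.
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        packet = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(packet, (broadcast, port))

    wake_on_lan("aa:bb:cc:dd:ee:ff")  # placeholder MAC of the sleeping machine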


> Also, you can probably just strap a wifi-capable relay to power pins on your motherboard.

That's a lot more work than just buying a different KVM that already exposes its spare GPIO pins for exactly that purpose. There are plenty of options that come bundled with the necessary cables and adapters to connect the KVM to the motherboard headers while leaving the existing power/reset buttons usable.


Maybe they have one timezone, but there are multiple "Chinas".


Does anyone know a battle-tested tool that would help with (almost) online migrations of PostgreSQL servers to other hosts? I know it can be done manually, but I'd like to avoid that.


PG's built-in WAL-level replication? Replicate the primary to a read replica, then switch the replica to become the primary. You'll have a bit of downtime while you stop connections on the original server, promote the new server to primary, and update your app config to connect to it.

I believe that's a pretty standard way to provide "HA" postgres. (We use Patroni for our HA setup)

https://github.com/patroni/patroni
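A rough sketch of the cutover check, assuming streaming replication is already running and using psycopg2 (host name and database below are placeholders). Once writes are paused and replay lag hits zero, it's safe to promote the replica - via Patroni, pg_ctl promote, etc.:

    import psycopg2

    # Run against the current primary: wait until the replica has replayed everything,
    # then promote it and repoint the application.
    with psycopg2.connect(host="old-primary", dbname="postgres") as conn:
        with conn.cursor() as cur:
            cur.execute("""
                SELECT application_name,
                       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
                FROM pg_stat_replication
            """)
            for name, lag in cur.fetchall():
                print(name, lag)  # cut over once this is ~0 with writes stopped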


We use the same setup, though with PgBouncer in front, so after switching the primary we just force all clients to reconnect from PgBouncer instead.

The clients will have to retry ongoing transactions, but that's a basic fault-tolerance requirement anyway.
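For anyone curious, that forced reconnect can be scripted against PgBouncer's admin console (the special "pgbouncer" database); the host, port and user below are placeholders, and RECONNECT needs PgBouncer 1.15+:

    import psycopg2

    # PgBouncer admin console; autocommit because its commands can't run in a transaction.
    admin = psycopg2.connect(host="pgbouncer-host", port=6432,
                             dbname="pgbouncer", user="pgbouncer")
    admin.autocommit = True
    cur = admin.cursor()
    cur.execute("RELOAD")     # pick up the config now pointing at the new primary
    cur.execute("RECONNECT")  # replace existing server connections as they are released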


In my experience, everyone's setup is slightly different, so it's hard to find a generic solution. pgcopydb is pretty good.

I can't remember the name, but I saw a Ruby-based tool on Hacker News a few months ago that would automate logical replication setup and failover for you.


pglogical can do that (or at least minimize the manual steps as much as possible)

I am not entirely sure, but I think CloudNativePG (a Kubernetes operator) can also be used for that.


Why would you run 50k+ connections if they can't all be active at once anyway? Unless you have 50k+ cores and I/O beefy enough not to get overwhelmed.

You can have as many connections as you want, but you'll have to trade that for lower work_mem settings, which hurts performance. Traditional advice is to keep it below 500 per PostgreSQL instance (I'd say per physical host).

I've run dozens of microservices handling thousands of requests per second with a total connection limit of around 200, most of which was still unused - all without any server-side pooler.


Because people run large numbers of frontends and workers that create a significant number of connections. It doesn't matter if they are all active.


Why would you want every "frontend" to keep an open connection all the time?

> it doesn't matter if they are all active

It does: if a connection is inactive (doesn't hold an open transaction), you should close it or return it to the pool.


so you are suggesting you close a connection between queries?


Between queries in the same transaction? No

Between transactions? Yes, absolutely

In fact, many libraries do it automatically.

For example, the SQLAlchemy docs explicitly say [0]:

> After the commit, the Connection object associated with that transaction is closed, causing its underlying DBAPI connection to be released back to the connection pool associated with the Engine to which the Session is bound.

I expect other reasonably sane libs for working with transactional databases do the same.

So, if you are doing pooling correctly, you can only run out of available connections if you want to have a lot of long running transactions.

So why would you want every one of your 50k frontends to keep an open transaction simultaneously?

[0] https://docs.sqlalchemy.org/en/20/orm/session_basics.html#co...
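As a rough illustration of that pattern (assuming SQLAlchemy 1.4+/2.0; the DSN and query are placeholders): the pool stays small and shared, and each request only holds a connection for the duration of its transaction:

    from sqlalchemy import create_engine, text
    from sqlalchemy.orm import Session

    # One small pool per process; placeholder connection string.
    engine = create_engine("postgresql+psycopg2://app@db/app", pool_size=5, max_overflow=5)

    def handle_request(user_id: int):
        with Session(engine) as session:  # checks a connection out of the pool
            row = session.execute(
                text("SELECT name FROM users WHERE id = :id"), {"id": user_id}
            ).one_or_none()
            session.commit()
        # On exit the connection goes back to the pool (not closed),
        # so other requests in this process can reuse it.
        return row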


Because there's overhead in making a connection: authenticating, setting the default parameters on the connection, etc. I've never seen a framework that closes DB connections between requests.

Of course, the better design is to write a nonblocking worker that can run async requests on a single connection and not need a giant pool of blocking workers, but that is a major architectural decision that can't be added late in a project that started with blocking worker pools. MySQL has always fit well with those large blocking worker pools, Postgres less so.
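A hedged sketch of that nonblocking style using asyncpg (the DSN and query are illustrative): hundreds of concurrent requests share a handful of connections because each one only holds a connection while its query runs:

    import asyncio
    import asyncpg

    async def main():
        # One small pool for the whole process; placeholder DSN.
        pool = await asyncpg.create_pool("postgresql://app@db/app", min_size=1, max_size=5)

        async def handle_request(user_id: int):
            # Connection held only for the duration of the query, then returned.
            async with pool.acquire() as conn:
                return await conn.fetchrow("SELECT name FROM users WHERE id = $1", user_id)

        # 500 concurrent "requests", at most 5 server connections.
        results = await asyncio.gather(*(handle_request(i) for i in range(500)))
        print(len(results))

    asyncio.run(main())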


As I said, you can return the connection to the connection pool.

From the perspective of keeping the number of open connections low, it doesn't really matter whether you close it or return it to the pool, because in either case the connection becomes available to other clients.


I might not be understanding what you're pointing out here. It sounds to me like SQLAlchemy is talking about a pool of connections within one process, in which case releasing back to that pool does not close that process's connection to the database. The parent comment is talking about one connection per process with 50k processes. My comment was that you don't need that many processes if each process can handle hundreds of web requests asynchronously.

If you are saying that a connection pool can be shared between processes without pgbouncer, that is news to me.


Of course, you're right, it is not possible to share a connection pool between processes without pgbouncer.

> Parent comment is talking about one connection per process with 50k processes.

It is actually not clear what the parent comment was talking about. I don't know what exactly they meant by "front ends".


The most common design for a web app on Linux in the last 20 years is to have a pool of worker processes, each single-threaded and ready to serve one request. The processes might be Apache ready to invoke PHP, or mod_perl, or a pool of Ruby on Rails or Perl or Python processes receiving the requests directly. Java tends to be threads instead of processes. I've personally never needed to go past about 100 workers, but I've talked to people who scale up to thousands, and they happen to be using MySQL. I've never used pgbouncer, but understand that's the tool to reach for rather than configuring Pg to allow thousands of connections.


Beauty of CPUs - they'll chew through whatever bs code you throw at them at a reasonable speed.


I don't think this is correct. The difference between well-optimized and unoptimized code on the CPU is frequently at least an order of magnitude in performance.

The reason it doesn't seem that way is that the CPU is so fast we often bottleneck on I/O first. However, for compute-bound workloads like inference, it really does matter.


While this is true, the most effective optimizations are ones you don't do yourself; the compiler or runtime does them. They get the low-hanging fruit. You can further optimize yourself, but unless your design is fundamentally bad, you're gonna be micro-optimizing.

Going from gcc -O0 to -O2 gives a HUGE performance gain. We don't really have anything to do this auto-magically for models yet. Compilers are intimately familiar with x86.


While the compiler is decent at producing code that is good in terms of saturating the instruction pipeline, there are many things the compiler simply can't help you with.

Having cache-friendly memory access patterns is perhaps the biggest one. Automatic vectorization is also still not quite there, so where there's a severe bottleneck, vectorizing manually may still considerably improve performance, if the workload is vectorizable at all.
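A crude way to see the cache effect even from Python, using numpy (sizes are arbitrary): both sums do the same number of additions, but the strided one pulls a fresh cache line per element and runs noticeably slower:

    import timeit
    import numpy as np

    a = np.ones(64_000_000, dtype=np.float32)  # ~256 MB, far larger than L3

    contig  = a[:4_000_000]  # 4M elements packed contiguously (~16 MB)
    strided = a[::16]        # 4M elements, one per 64-byte cache line

    print("contiguous:", timeit.timeit(contig.sum, number=50))
    print("strided:   ", timeit.timeit(strided.sum, number=50))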


Which Caddy plugins are you using?


I was bullied by 3 separate bullies in my school years; every time it ended when I reached some deep rage in me and hit back.

One time I threw the bully down the stairs, another time I hit him with a chair, and the last time was when a bully stole my phone, started going through the photos my girlfriend had sent me, and commented on them inappropriately - I smashed his face so hard his glasses flew like 4 meters away. I was short and overweight; they never expected the amount of power I could generate when in rage mode.

So from my limited, anecdotal experience - it works. It also shows other people that they can stand up to the bully and that the bully is not some all-powerful being who has to define their present and future.


We are but puny agents of entropy.


Countries always fighting the most important battles :eyeroll:


Porn is just the justification. It's easy to find something repugnant on whatever streaming video site and then start with the "protect the children" nonsense.

The real issue is always control.


Backward countries being backward. The main flaw of modern liberal societies is that parts of them have stopped believing that liberalism is indeed progress. All hail the moral police and long live cultural relativism or whatever its currently trendy post-structural reconstruction is.


It doesn't help that the term 'liberal' has had its meaning so co-opted that it now refers to people who reject freedom of speech and belief.


True, though I would say that is leftism. Leftists actually hate liberals and use it as a slur, believe it or not.


While they often go together, economic liberalism shouldn't be confused with social liberalism.

