Isn’t this a forum of hackers caring about the news? Everyone seems excited here and that excitement naturally leads into curiosity for many who identify as hackers.
There’s a bunch of people crapping on you who clearly haven’t been through a flight test campaign.
100% with you. The teams I’ve worked with would be celebrating and trying to figure out what’s burning at the same time. And especially trying to figure out if there’s anything that they need to do to collect evidence for that investigation (eg zooming the remote PTZ cameras in on specific areas or things like that)
Youtube is an incompetent organization. I've seen it take them nearly 48 hours to restore channels stolen by scammers to their rightful owners (all the while allowing the crypto scammers to continue streaming.)
There used to be a Twitter AppleTV app and I recently saw an X app for TV platforms has been brought back, but I don’t know if the AppleTV version is out yet.
Of course I’d rather they just stream to YouTube in 4K.
I tried Node when the trend started and it was okay. But the problem was that the actual script was also the server. So if the script hung, the whole server hung. Went back to PHP and never looked back.
It sounds like you "tried Node" but didn't learn/read enough to understand the event loop architecture (and why node is asynchronous). Node is pretty simple (and powerful) but yes if you don't know about the event loop, you'll make a blocking call (synchronous function) and end up blocking the loop (which will hang your whole process).
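A minimal sketch of the difference using the built-in http and fs modules (the file path and port here are made up):

function handler(req, res) {
  if (req.url === '/blocking') {
    // Synchronous read: the event loop is stuck here, so every other
    // request to this process waits until the read finishes.
    const data = require('fs').readFileSync('/tmp/big-file.json');
    res.end(data);
  } else {
    // Asynchronous read: the event loop stays free to serve other
    // clients while the read is in flight.
    require('fs').readFile('/tmp/big-file.json', (err, data) => {
      if (err) return res.end('error');
      res.end(data);
    });
  }
}

require('http').createServer(handler).listen(8080);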
Not that it matters, because you don't care and no one uses promises any more, but it does really sadden me how much nested promise code I've found over the years.
e.g. this:
function sad(input) {
  return new Promise((resolve, reject) =>
    foo(input).then((data) => {
      bar(data).then((data2) => {
        baz(data2).then(resolve, reject)
      })
    })
  );
}
could quite as easily have been written like this:
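function notSad(input) {
  // same foo -> bar -> baz pipeline, just chained instead of nested
  return foo(input)
    .then((data) => bar(data))
    .then((data2) => baz(data2));
}

(or simply `return foo(input).then(bar).then(baz)`), which also has the bonus that rejections from foo or bar actually propagate instead of leaving the outer promise hanging forever.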
One of the rules I teach people when they're starting using promises is never ever to write `new Promise(...)`.
That's not because it's bad - it's a useful tool if you're connecting event or callback-based systems to the world of promises and async/await. But a lot of people who are just getting started with promises seem to take a while to get used to chaining, and often resort to using the `new Promise` constructor to get promises to appear in the places they expect. So giving them a blanket rule ("never use `new Promise`") forces them to figure out a different approach.
It's become a lot easier since the introduction of async/await, where chaining isn't so important, but there are still always times when you need to understand that underneath the syntax sugar, there's still promises happening, and so I'm still finding the rule useful.
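The distinction in code, roughly (readConfig here is a stand-in for any callback-style API):

// Legitimate use: bridging a callback API into the promise world, once, at the boundary.
function readConfigAsync(path) {
  return new Promise((resolve, reject) => {
    readConfig(path, (err, config) => {
      if (err) reject(err);
      else resolve(config);
    });
  });
}

// The habit the rule is meant to break: wrapping something that is already
// a promise just to make a promise "appear".
function needlessWrap(url) {
  return new Promise((resolve) => {
    fetch(url).then((res) => resolve(res)); // could just be: return fetch(url)
  });
}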
That's not inherent to PHP, but rather the ecosystem it's usually used in. The standard "LAMP" stack, for example, has Apache in it for the actual server, talking to PHP using a CGI interface. So if your PHP script crashes or hangs, the server itself is still up, and capable of serving other clients.
If you set up a Node script where e.g. Express talks directly to the clients, then yes, the script crashing or hanging means the server becomes unavailable or unresponsive. However, you can also set up a layer in front of Node. See cgi-node for replicating the CGI workflow you might be used to.
There are some advantages to the standard Node model though: the program can manage its own resources, such as keeping a database connection open; it can run asynchronous maintenance tasks; it can see and report the current server load; it can easily combine HTTP(S) communication and Web socket streams; etc.
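A rough sketch of what that buys you (express and pg are just stand-ins for whatever framework and DB client you actually use, and the queries/tables are made up):

const express = require('express');
const { Pool } = require('pg');

const app = express();
const pool = new Pool(); // one connection pool, shared by the whole long-lived process

app.get('/users/:id', async (req, res) => {
  // every request reuses the already-open pool instead of reconnecting
  const { rows } = await pool.query('SELECT * FROM users WHERE id = $1', [req.params.id]);
  res.json(rows[0]);
});

// an asynchronous maintenance task running inside the same process
setInterval(() => pool.query('DELETE FROM sessions WHERE expires_at < now()'), 60000);

app.listen(3000);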
> The standard "LAMP" stack, for example, has Apache in it for the actual server, talking to PHP using a CGI interface. So if your PHP script crashes or hangs, the server itself is still up, and capable of serving other clients.
Not exactly... the standard, typical LAMP stack makes use of mod_php, so the PHP engine is in-process with one of the Apache processes.
The fact that Apache has multiprocess/hybrid workers is actually why the server stays up and can serve more requests.
Some contemporary LAMP stacks use FPM, I guess; mostly at shared hosting ISPs, for example, because of the possibility of running the script as a user process.
I would imagine that FPM is more common than people realize because it’s also usually faster, and running scripts as a separate user is more secure. For example, if PHP is a user with read-only access to the document root, it is far more difficult for an attacker to do file injection.
PHP has the opposite problem where if a script hangs on IO, you can be cooked (assuming other requests block similarly).
Different trade-offs for sure, but given IO is often the bottleneck, having it force you to think about running code asynchronously can often result in code that runs more in parallel than the equivalent first-blush PHP code.
For instance, say you were writing, in a functional style, a program that fetched entries from a web service, ran a summarization pass over each result using another service, and then inserted the summarization results into a database. A style like the one shown in this article might serialize the summarization/insert operations, one after the other, where you'd instead want each result processed in parallel.
Lots of libraries in node that make that easy and the code straightforward to reason about.
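For example, something like this (fetchEntries, summarize and insertSummary are stand-ins for the services and DB client in question):

async function processAll() {
  const entries = await fetchEntries();

  // start every summarize + insert at once and wait for all of them,
  // instead of awaiting each one in sequence
  await Promise.all(entries.map(async (entry) => {
    const summary = await summarize(entry);
    await insertSummary(entry, summary);
  }));
}

And if the downstream service can't take unbounded concurrency, libraries like p-limit make it a one-liner to cap how many run at once.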
I mean, there are tradeoffs with both approaches. It seems to me that Node’s stateful approach is used by other web servers and languages too.
The main tradeoff is you’re now bootstrapping the entire application for every request in PHP. If you have a massive application or framework, that might not be the fastest thing in the world.
Since PHP 7.4 there's been opcache preloading to keep a lot of the framework instantiated, and in PHP 8.1 the opcache inheritance cache covered some of the same ground as preloading.
Some frameworks like Symfony considered removing preload support, but they were still seeing benchmarks of 10% better performance with it, so it was kept.
The biggest pain point with preload IMO is that it's global, not per pool, and php-fpm needs to be restarted to update the preload script.
You can make this same mistake in PHP: blocking operations with a single worker process in mod_php, php-fpm, or when handling sockets directly. The fix is pretty standard: run multiple processes, threads, or use non-blocking operations.
The most common PHP setups are simply running multiple processes behind a httpd daemon, so this problem is less frequently encountered. But it's still lurking at a certain concurrency level.
Even the 3rd-gen is pretty old at this point, and it retains the dumb rounded edges of much older models. The original SE had the nice flat edges, which Apple has wisely returned to.
We have lots of commuter trains near large cities in the US. Many people who work in NYC, Chicago, Washington DC, and the like take commuter trains into and out of these cities from surrounding towns and cities. The issue is outside these big metro centers, most of the US isn't as densely populated to make this feasible.
I think by international standards the idea that American cities usually have trains is not right. Basically there are five urban agglomerations with useful train service in America and the other cities have one train a day (that may depart at 2am) or, usually, no trains. Can you get from Dallas to Tulsa by train? Absolutely not. But a city the population of Tulsa would have several trains per hour if it was in Switzerland or France or Japan.
The real situation is closer to "none" than "lots".
Also remember all the user-account leaks. If you were part of a leak then it is trivial for bad actors to craft the perfect email when they know which sites you have accounts on.