No, it’s a very special country with almost all of its lawyers on retainer for PayPal. And since PayPal only answers to litigation inside Luxembourg, there is a problem: if you try to sue them there, you will have a hard time finding a lawyer who will represent you.
Executing a GitHub runner in a container is a no-brainer, but it's still easy to overload the _host_ with too many jobs/containers, so managing resources is always up to the operator.
I understand why GitHub has this constraint - to avoid clashes between jobs. If multiple jobs (from the same repository) are executing simultaneously within the same stateful environment, they are more likely to clash over shared resources (/tmp, cleanup tasks, database names, etc.). However, even if my jobs are clean and idempotent, GitHub is nudging me to think about runners as "VMs" rather than "containers" (because CI jobs often involve their own containers, and docker-in-docker is a pain), and "self-hosting a bunch of CI VMs" becomes expensive rather quickly.
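On the "overloading the host" point: if the runners are containerized, per-container resource caps are a cheap way to keep one noisy job from starving the rest. A minimal Compose sketch - the image name, repo URL, and token variable here are illustrative placeholders, not any specific runner distribution:

```yaml
services:
  gha-runner:
    image: example/github-actions-runner:latest  # placeholder; use whatever runner image you build
    environment:
      REPO_URL: https://github.com/my-org/my-repo  # hypothetical repository
      RUNNER_TOKEN: ${RUNNER_TOKEN}                # registration token, supplied at deploy time
    # Hard caps per runner container, so the operator controls host load:
    cpus: "2"
    mem_limit: 4g
    restart: unless-stopped
```

Scale out by adding more such services (or replicas) up to whatever the host can actually sustain; the caps make the per-runner footprint predictable.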
well, GHA supports running actions in containers, which is the best way to control the environment your CI/CD runs in, so putting your runner in a container won't work for everyone.
but, if it does work for someone, doing what you've done will give a much better experience to their developers.
my employer uses single-user VMs for its runners; it works well, but when Actions demand is high it can take a few minutes for a runner to get around to taking my job. that would be much less of a problem with dockerized runners.
Single-threaded performance just hasn't shifted that quickly in the interim.
In order for the quad-core, eight-thread 960 to be slower than an Actions instance, there'd need to have been an 8x uplift in single-core performance since '08. It's been more like 2x.
Yes, but the 2008 CPU was 4 cores (and 8 GB of RAM). Also, those are 4 real cores, compared to the runner's two logical cores. Probably still slower, but with a much faster network.
~Comparable. Or maybe I got it wrong; either way, both are still dog slow compared to anything people actually use.
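The "~Comparable" claim is easy to sanity-check as back-of-the-envelope arithmetic. This sketch takes the thread's own assumption of a ~2x single-thread uplift since 2008 (not a benchmark) and compares aggregate throughput of 4 real cores against 2 faster logical cores:

```python
# Rough throughput comparison: 2008 quad-core desktop vs a 2-vCPU hosted runner.
# The ~2x single-thread uplift is the figure assumed in this thread, not measured.
old_cores, old_single_thread = 4, 1.0   # four real cores at baseline speed
new_vcpus, new_single_thread = 2, 2.0   # two logical cores, each ~2x faster

old_throughput = old_cores * old_single_thread   # 4.0
new_throughput = new_vcpus * new_single_thread   # 4.0 -> roughly comparable in aggregate

print(old_throughput, new_throughput)  # prints: 4.0 4.0
```

Of course this ignores SMT (two logical cores may share one physical core), memory bandwidth, and I/O, which is where "both are still dog slow" comes from.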
do you have a source for that? Last time I looked, the cloud CPUs were Intel-based parts optimized for energy usage, with lower clock speeds and lower single-thread performance than older (at that time ~2012) CPUs.
It also avoids unnecessary complexity in many cases, and I have witnessed both. Just as one should not use an expensive Zeiss microscope as a hammer substitute to drive nails into a concrete wall, one perhaps ought not to stick a graph database everywhere it does not belong. Engineering (including software) is about selecting the appropriate tooling for each job.
> If it wouldn't be narrow neo4j wouldn't need to lay off stuff.
I fail to see how the two are related. If a company struggles to execute its incumbent business model, that is not necessarily a problem with the product (though it may or may not be).
Not losing its reputation as a whole, but losing the reputation of being the leader in this field? Sure.
When they released BERT there was no doubt that they were the leaders. Even laymen heard about it.
What AI advances do laymen most talk about now? DALL-E, Stable Diffusion and ChatGPT. AlphaCode and LaMDA gave some headlines but not even close. Everyone is too busy trying ChatGPT to pay attention to those.
This is probably good enough for 99% of the workloads out there.
For everything else you have architects who will do the right thing for you.