
Choose PostgreSQL as long as possible.

It's probably good enough for 99% of the stuff running out there.

For everything else, you have architects who will do the right thing for you.


I don't believe that the Luxemburg court is just racist.

Luxemburg is not some random backwards country


No, it’s a very special country with almost all of its lawyers on retainer for Paypal. But, since Paypal only answers to litigation inside Luxemburg, there is a problem. If you try to sue them there, you will have a hard time finding a lawyer who will represent you.


Nationalism is also a thing, and tribalism, just saying


.art has the same model. Normal vs. Premium.


I'm a driver and would prefer that.

I drive to go from a to b and not to race.

I care more about not getting a fine than traveling as fast as possible


I've been a driver for some three decades or so, and I've noticed that when you watch the speedometer you generally don't get fines.


I'm in Germany, where there are plenty of speed-limit changes. Sometimes it's not clear whether I missed a speed-limit cancellation or not.

Or they reduce from unlimited to 100, 80 and 60 in no time.

It's not a big issue, but since I don't mind a hard limit, it's a better deal for me.


in general this is just a way for the state to collect money from drivers.


I think this is stupid and not true.


it’s not about racing. it’s about having the freedom to go fast if needed.


One person's need to go fast (wtf?) is not as important as the safety of everyone using the street.

For emergencies there are proper laws.

And no, your Taco Bell getting cold is not a good reason


My career.

Not joking, just another angle: I worked on that actively.


One of my most frustrating revelations is this:

Every new human has to learn aaaaalll of this over again.

That's also why empires like the Roman one were able to disappear.

Knowledge fades, and we need to actively teach it over and over again.

More money to our education system!

And btw people still have no clue how computers work. This hasn't changed too much :(


That sounds more like expected behavior than an issue.

You might not want to overload one instance.

And running it like you do is a no-brainer anyway


Executing a GitHub runner in a container is a no-brainer, but it's still easy to overload the _host_ with too many jobs/containers, so managing resources is always up to the operator.

I understand why GitHub has this constraint - to avoid clashes between jobs. If multiple jobs (from the same repository) are executing simultaneously within the same stateful environment, they are more likely to clash over shared resources (/tmp, cleanup tasks, database names, etc.). However, even if my jobs are clean and idempotent, GitHub is nudging me to think about runners as "VMs" rather than "containers" (because often CI jobs involve their own containers, and docker-in-docker is a pain), and "self-hosting a bunch of CI VMs" becomes expensive rather quickly.
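To make the "managing resources is up to the operator" point concrete, here's a rough sketch of one way to cap how many containerized jobs run at once on a single host. The image name, resource limits, and job count are all placeholders, not anything GitHub prescribes:

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    MAX_CONCURRENT_JOBS = 4  # cap picked by the operator, not by GitHub

    def run_job(job_id: int) -> int:
        # Placeholder image and limits; a real setup would launch the Actions
        # runner container here. --cpus/--memory keep one job from starving the host.
        return subprocess.call([
            "docker", "run", "--rm",
            "--name", f"ci-job-{job_id}",
            "--cpus", "2", "--memory", "4g",
            "my-runner-image:latest",  # hypothetical image name
        ])

    # The thread pool itself enforces the concurrency cap.
    with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_JOBS) as pool:
        exit_codes = list(pool.map(run_job, range(10)))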


well, GHA supports running actions in containers, which is the best way to control the environment your CI/CD runs in, so putting your runner in a container won't work for everyone.

but, if it does work for someone, doing what you've done will give a much better experience to their developers.

my employer uses single-user VMs for its runners; it works well, but sometimes Actions usage is high and it can take a few minutes for a runner to get around to taking my job. that would be much less of a problem with dockerized runners.


A modern vCPU is much faster than a 2008 CPU.

There is a significant IPC increase with every generation.


I'd put money on an i7 960 from '08 beating a 2 vCPU GitHub Actions instance in raw compute.

My single threaded code runs about half as fast in Actions compared to my 3900x. They're not fast instances.


Your 3900x is from 2019. The OP compared a CPU from 2008!


Single threaded performance just hasn't shifted that quickly in the interim.

In order for the quad core, eight thread 960 to be slower than an Actions instance, there'd need to have been an 8x uplift in single core performance since '08. It's been more like 2x.

https://mlech26l.github.io/pages/2020/12/17/cpus.html


Yes, but the 2008 CPU was 4 cores (and 8 GB of ram). Also, that is 4 real cores, compared to two logical cores. Probably still slower, but with much faster network.

~Comparable. Or maybe I've got it wrong; either way, both are still dog slow compared to anything people actually use.
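Back-of-the-envelope, with very rough assumed numbers (a ~2x single-thread uplift since 2008, as in the link above, and Hyper-Threading worth maybe 25% extra; both are guesses):

    I7_960_CORES = 4            # physical cores on the i7 960
    HT_FACTOR = 1.25            # assumed throughput gain from Hyper-Threading
    SINGLE_THREAD_UPLIFT = 2.0  # assumed modern-vCPU vs. 2008-core single-thread ratio
    RUNNER_VCPUS = 2            # the GitHub-hosted runner size discussed above

    old_aggregate = I7_960_CORES * HT_FACTOR                # ~5.0, in 2008-core units
    runner_aggregate = RUNNER_VCPUS * SINGLE_THREAD_UPLIFT  # ~4.0, same units

    print(f"i7 960 aggregate:        {old_aggregate:.1f}")
    print(f"2-vCPU runner aggregate: {runner_aggregate:.1f}")
    # With these guesses the old quad core edges ahead on well-threaded work,
    # while the runner wins single-threaded; roughly comparable either way.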


do you have a source for that? Last time I looked, the cloud CPUs were Intel-based parts optimized for energy usage, with a lower clock speed and lower single-thread performance than older (at that time, 2012-ish) CPUs


It adds complexity to a narrow use case.

If it weren't narrow, neo4j wouldn't need to lay off staff.

Your examples do not refute this


> It adds complexity to a narrow use case.

It also simplifies the unnecessary complexity in many cases, and I have witnessed both. Just like one should not use an expensive Zeiss microscope to hammer nails into a concrete wall as a hammer substitute, one perhaps ought not to stick a graph database everywhere where it does not belong. Engineering (including software) is about selecting the appropriate tooling for each job.

> If it weren't narrow, neo4j wouldn't need to lay off staff.

I fail to see how the two are related. If a company struggles with the execution of their incumbent business model, perhaps it is not necessarily related to the product (may or may not be though)?


They show what it can do, people at Google can use it, and there are papers.

Google publishes plenty of other ML-based papers.

Assuming in any way that Google might lose its reputation because of Imagen is very, very far-fetched.


Not losing its reputation as a whole, but losing the reputation of being the leader in this field? Sure.

When they released BERT there was no doubt that they were the leaders. Even laymen heard about it.

What AI advances do laymen most talk about now? DALL-E, Stable Diffusion and ChatGPT. AlphaCode and LaMDA gave some headlines but not even close. Everyone is too busy trying ChatGPT to pay attention to those.


Does elite AI talent really care what laymen new to AI think? If so, that's a problem for Google, but I doubt they do.

