Personally I'm more pissed off about pensioners on final-salary schemes still getting the state pension, even though they don't need it. That's far more fucking expensive and doesn't serve a purpose, well, apart from buying votes. Means-test that shit, right now.
Electricity in the UK often costs 3x or more what it does elsewhere for the same energy content. We have some of the most expensive electricity in the world.
But the amount of compute needed to serve it is not very high. It's all text. The bandwidth and compute needed to serve a Netflix or a YouTube are far, far harder problems, and they manage just fine.
Netflix and YouTube both built custom CDNs. Netflix uses AWS for control plane only.
Also, respectfully, you have no idea what you're talking about. "Just text" doesn't make it easy to solve. GitHub Actions aren't just text and take a lot of compute.
You're right, GitHub Actions do indeed take a lot of compute, but the status incidents don't seem to be limited to just Actions.
I never said "just text" makes it easy to solve, just that I felt Netflix and YouTube solved harder problems (in terms of serving the load), as demonstrated by their custom CDNs and other engineering feats. YouTube gets a similar number of videos uploaded per day as GitHub gets commits (20 vs. 39 million, the latter from the 275-million-a-week figure listed elsewhere in this thread), and I can't believe those are equally hard to serve in terms of compute and bandwidth.
I agree that it is not an easy problem to solve when load scales the way it has for them, and I feel for the technical people there, but I don't disagree with the level of dissatisfaction directed their way when customers who pay GitHub large sums of money don't receive adequate service.
Nature Communications, not Nature. There is quite a large difference between them (and neither is necessarily a sign of quality, so much as of the ability to market well to an editor).
For the record I have published in Nature Communications (and not Nature) and therefore know a little bit about what it takes to publish papers there.
“Do you really believe no human is going to read your resume at some point in the process and notice the classic AI tells?”
Even here on HN many people don’t recognize AI tells that are obvious. Pretty much 100% of all articles posted on HN have been AI generated for months and months already and people don’t seem to care.
I have very little faith in humanity being able to deal with the chaos that LLMs are going to unleash on society.
Heck, most resumes are probably skimmed at best already.
When I’m hiring, a human recruiter (or the hiring manager) reads most resumes.
For us, there is some sorting by basic keyword analysis and we start near the top, but there is no proverbial black box that rejects candidates outright.
If candidates are ignored by humans, it’s not because AI rejected them, it’s because we are starting with candidates earlier in the list and might not make it to applicant 537.
That's rather unlikely to be the case, and the original article itself supports this: if your statement were true, they would have found the human-generated resume 100% less likely to be shortlisted.
Obviously it’s not that 100% of human resumes are going to be filtered out, but it’s quite damning that human-written resumes are more likely to be filtered out just because they weren’t LLM-ified.
Honestly it's scary how misunderstood this is by the general public, the media and EVEN scientists.
There is a shocking number of computer-vision tasks where scientists claim you can get X info from a picture of Y, when even with ML/AI you can't extract data that isn't there. The fact that I can add an arbitrary amount of high-calorie fat to a meal without changing its appearance shows, by definition, that it's pointless. A 1,000-calorie and a 100-calorie milkshake can look identical, and you'd have no way of telling them apart from an image, even with a super-intelligent system.
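The milkshake argument can be made concrete with a toy sketch (all numbers here are hypothetical): if two physically different shakes produce identical pixels, any model is a single function of the image and must return one answer for that input, so its worst-case error has a hard floor no amount of training can remove.

```python
# Toy illustration: two milkshakes that are pixel-identical but differ
# in calories. Any predictor f(image) returns ONE value for this input,
# so it cannot be accurate for both drinks at once.
pixels = bytes(64 * 64 * 3)            # identical image data for both shakes
ground_truths = [100, 1000]            # calories of the two physical shakes
best_guess = sum(ground_truths) / 2    # the single answer minimizing worst-case error
worst_error = max(abs(best_guess - c) for c in ground_truths)
print(worst_error)                     # 450.0: no image-only model can beat this
```

The point is that the image→calories mapping isn't a function at all: one input corresponds to many correct outputs, so the missing information cannot be recovered by a cleverer model.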
Similarly, I see serious research papers on extracting an object's material from an image of it, which cannot be done for the same reason: how an object looks has very little to do with what it's made of, otherwise painting and other art would clearly be impossible. The information is just not there in the data.
It’s like CSI “enhance!” AI image upscaling. People will run it, see it fabricate details, and then draw the wrong lesson from it (“AI fabricates things!”), when that is exactly what they asked the model to do; there is no magic math that can extract ground truth that was never in the image to begin with.