Jeslijar's comments | Hacker News

Pretty much. America is destined for a decline. The billionaires can make money regardless of borders by constantly moving things around and using their expansive resources to find every possible loophole and escape hatch while manipulating public policy.


This is reductive and wrong. The billionaires make money hand over fist either way. They own the companies. They don't care if the new campus or factory is in China or India. They skim their cut off its productivity either way.

It's your fellow countrymen who are peddling the policies that, at the margin, push those investments overseas.


I'll let docker's security team know that an insecure, obsolete docker image is being served and the maintainers have officially acknowledged they will no longer support it.

Best to get insecure and vulnerable software out of the hands of those who may not be familiar with this CVE, or with their change in policy, which has not gotten a press release of any kind.


You're letting docker's security team know that they're serving Ubuntu 14.10? https://hub.docker.com/layers/library/ubuntu/14.10/images/sh...


There is a major difference between having an old image available and having it tagged as latest with no further updates, on a channel that previously published every update with nearly no delay.


Someone seems to already be at it on Discussions: https://github.com/minio/minio/discussions/21655

    > I felt it might be appropriate for me to reach out as one of the stewards of the Docker Official Images program.


So that's not the same thing. Docker "official images" are a category of curated docker images. Minio is not one of them. The official curated images are here: https://hub.docker.com/u/library

The minio image is basically a community one that anyone could have created, but it still shows up in the overall Docker Hub listings. It's created by MinIO themselves. I'm kind of surprised they haven't removed it, but with over a billion downloads it's easily in the top ten of whatever category it falls under, generating substantial free advertising.


Did you read the discussion? The docker steward is proposing making a docker "official" minio image to replace the minio/minio image.


Yes, I read it. Last time it was raised, the same guy who announced they are doing source-only distribution said they could definitely do it; then another member of the team closed it, saying it wouldn't happen.

Given the developers have not replied to the thread after a day and the one who was enthusiastic is now the one doling out the information that they are no longer supporting their docker image, I highly doubt they will perform a 180 on policy and suddenly work with them to provide an official curated image. If they wanted to keep the docker image alive they would have continued updating it and not shut down community feedback begging for them to maintain it.

Docker has a vested interest in keeping popular images maintained, and a billion+ download package suddenly becoming defunct is noticeable to them. Minio seems to be prioritizing their commercial offering and removing support for their open source offering though. Nuking their community documentation doesn't spell anything good for the future of minio for the FOSS community.


Oh that will be an interesting discussion to watch.


> Best to get insecure and vulnerable software out of the hands of those who may not be familiar with this CVE, or with their change in policy, which has not gotten a press release of any kind.

Why is that the best? MinIO is not the type of thing that people ought to be directly making available on the Internet anyway, so CVEs are mostly irrelevant unless you are an organization that has to keep on top of them, in which case you certainly have a process in place to do so already.

People straight-up pulling an image off Dockerhub to run (so not a particularly sophisticated use case) seem like they'd be the least likely to be impacted by a CVE like this. The impact is apparently "[it] allows the attacker to access buckets and objects beyond their intended restrictions and modify, delete, or create objects outside their authorized scope". Are people pulling from Dockerhub even setting up anything but the absolute most basic (Allow All) ACL?


Zero trust is the way to assess threats, not whether something is exposed to the Internet or not.


No, it is a defense strategy. For e.g. hobbyists, it's basically irrelevant, and having something on a private LAN is fine. There is almost no chance of an issue. Not everything in the world needs to be maximally secured, and the people who are using those IAM policies are probably not pulling a vanilla image off Dockerhub to run something as fundamental as their storage layer. They probably also have firewalls tightly locking down which machines are able to talk to MinIO on top of token auth.

The cargo-culting around security is so bizarre to me. In a context where e.g. your organization needs to pass audits, it's cheaper/easier to just update stuff and not attempt to analyze everything so you can check the box. For everyone else, most security advisories are just noise that usually aren't relevant to the actual way software is used. Notably, no one in these discussions is even bringing up what the vulnerability is.


> Notably, no one in these discussions is even bringing up what the vulnerability is

That's because of two things. First, assessment requires a deep dive into the issue, not just a summary. Second, and tied to the first, you must be ready to update if required, without issue.

In every case, it costs less time, even for home lab users, to update instead of assess.

If it isn't, you're using terrible software, for example software which pushes security updates along with API and code changes. Such software doesn't take user security seriously, and should be avoided at all costs.

There's no way around it. Just do it right, don't half ass with excuses. Don't use terrible software. If it's plugged into a network, zero trust it is.


Regrettably Docker has let me know they are uninterested in taking any action.

"Hello,

This does not qualify as an infringement to our Terms of Use policy. Deprecating such images and repo(s) is the responsibility of the owner and we recommend you reach out to them. Docker advises its users to opt into using images under our official programs and offerings such as Docker Official Images and Docker Hardened Images.

Thank you, Security@Docker"

In their ToU under section 6.6, they outline how they may scan images for vulnerabilities and request that the owners of said packages fix them, or simply remove them from the site. They clearly do not do this, though, even when notified of a high-criticality vulnerability.


Unfortunately I don't think they're going to get involved there. There are already multiple "official" images on Docker Hub that are unmaintained and have plenty of CVEs (e.g. Centos https://hub.docker.com/_/centos/tags)

I think the most they'd do is add the DEPRECATED note to the Docker hub page as they have done for things like Centos


Imagine the absolute chaos if Docker did that and pulled vulnerable images offline. Not a single company would be able to build their software anymore.

Actually, Docker did something like that, when they limited the number of Docker images they would host for you for free to a reasonable number. The result was pretty similar to this current outcry: https://news.ycombinator.com/item?id=24143588


...Or just spend 10 minutes and familiarise yourself with the basic docker build command? It's really dead simple.


Then you have to maintain a pipeline and registry just to fix something that should be fixed upstream?


Again folks, you don't "fix" anything by building a docker image. The fix is already in the source; you just need to run one command to build the image. The registry is something you should have in your infrastructure if you are at least halfway seriously doing anything in the domain of containers and Kubernetes. But if you don't have one, it seems you are running things locally for your toy project. Well then, in that case just deploy from your local docker cache. All of this is actually merely a couple of commands in your simplified use case.
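For anyone who hasn't done it before, here's a rough sketch of what those commands look like (assuming the upstream repo still ships a Dockerfile at its root; the image name and credentials below are made up):

    # build a local image straight from the upstream source
    git clone https://github.com/minio/minio.git && cd minio
    docker build -t minio-local:latest .

    # run it much like you would have run the old minio/minio image
    docker run -d -p 9000:9000 -p 9001:9001 \
      -e MINIO_ROOT_USER=admin -e MINIO_ROOT_PASSWORD=change-me-now \
      minio-local:latest server /data --console-address ":9001"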


The fix is upstream, they're giving away the patch for free.

Setting up a registry and a pipeline is annoying but it's hardly a life changing event. It's certainly easier than migrating to a competitor.
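And if all you need is somewhere to push the image you just built, a throwaway self-hosted registry is a couple more commands (a minimal sketch using the stock registry:2 image; add TLS and auth before exposing it beyond localhost):

    # run a bare-bones local registry
    docker run -d -p 5000:5000 --name registry registry:2

    # tag the locally built image and push it there
    docker tag minio-local:latest localhost:5000/minio-local:latest
    docker push localhost:5000/minio-local:latest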


Why is a month's expiration better than a year or two years?

Why wouldn't you go with a week or a day? isn't that better than a whole month?

Why isn't it instead just a minute? or a few seconds? Wouldn't that be better?

Why not have certificates dynamically generated constantly and have it so every single request is serviced by a new one and then destroyed after the session is over?

Maybe the problem isn't that certificates expire too soon, maybe the problem is that humans are lazy. Perhaps it's time to go with another method entirely.


I think it's all about change management.

A whole month puts you in the zone of "if you don't have the resources to automate it, it's still doable by a human; not enough to crush somebody, but enough to make 'let's fully automate this' an option worth considering."

Hence why it's better than a week or a day (too much pressure for small companies) and better than hours/minutes/seconds (which would mean going from one year straight to 'it must be fully automated right now!').

A year or two years was not a good idea, because you lose knowledge and it creates pressure (oh my... not the scary yearly certificate renewal; I remember last year we broke something, but I don't remember what...).

With a month, you either start to fully document it, or at least keep it fresh in your mind. A month also gives you time to think, each time, "OK, we have 30 certificates; can't we use a wildcard, or a certificate with several domains in it?"

> Perhaps it's time to go with another method entirely.

I think that's the way forward, it's just that it will not happen in one step, and going to one month is a first step.

Source: we have to manage a lot of certificates for a lot of different use cases (SSH, mutual SSL for authentication, classical HTTPS certificates, etc.), and we learned the hard way that no, two years is not better than one, and I agree that one month would be better.

also https://www.digicert.com/blog/tls-certificate-lifetimes-will...


I think the less conservative stakeholders here would honestly rather do the six-day thing. They don't view the "still doable by a human" thing as a feature; they'd rather everyone think of certificate management as something that has to be fully automated, much like how humans don't manually respond to HTTP requests. Of course, the idea is not to make every tiny organization come up with a bespoke automation solution; rather, it's to make everyone who writes web server software designed to be exposed to the public internet think of certificate management as included within the scope of problems that are their responsibility to solve, through ACME integration or similar. There isn't any reason in principle why this wouldn't work, and I don't think there'd have been a lot of objections if it had worked this way from the beginning; resistance is coming primarily from stakeholders who don't ever want to change anything as they view it as a pure cost.

(Why not less than six days? Because I think at that point you might start to face some availability tradeoffs even if everything is always fully automated.)


> it creates pressure (oh my... not the scary yearly certificate renewal; I remember last year we broke something, but I don't remember what...)

Ah yes, let's make a terrible workflow to externally force companies who can't be arsed to document their processes to do things properly, at the expense of everyone else.


But it's a decent trade-off and you're using sarcasm in place of fleshing out your claim.

Monthly expiration is a simple way to force you to automate something. Everyone benefits from automating it, too.


I just recently had an executive-level manager ask if we could get a 100-year cert for our ERP, as the hassle of cert management and the massive cost of missing a renewal made it worth it.

He said six figures for the price would be fine. This is an instance where business needs and technology have gotten really out of alignment.


How on earth would that make more sense than properly setting up ACME and forgetting about the problem for the next hundred years?? If your bespoke ERP system is really so hostile toward cert changes, put it behind a proper reverse proxy with modern TLS features and self-sign a certificate for a hundred years, and be done with it.

It'll take about fifteen minutes of time, and executive level won't ever have to concern themselves with something as mundane as TLS certificates again.
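The self-signed half of that really is a one-liner (a sketch with a hypothetical internal hostname; -addext needs OpenSSL 1.1.1 or newer, and clients still have to be told to trust the result):

    # 100-year self-signed cert and key for an internal-only hostname
    openssl req -x509 -newkey rsa:4096 -nodes -days 36500 \
      -keyout erp.key -out erp.crt \
      -subj "/CN=erp.internal.example.com" \
      -addext "subjectAltName=DNS:erp.internal.example.com"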


Support contract states we cannot put it behind a proxy. We used to use HAProxy and multiple web server instances, but the support switched to India and they claimed they could no longer understand or support that configuration. Since it is a main system for the entire org and the support contract is part of our financial liability and data insurance, the load balancer had to go. This is corporate enterprise IT. Now you know why sysadmins are so grumpy.


Most safety & security dysfunction stories: high level management-tier misaligned incentives, incompetence, and ignorance, overriding the expert advice of mere peons, leading to predictable catastrophes (not to mention, usually, extra costs in the meantime—just hidden ones).

Most solutions: make the peons watch a training video or attend a training session about how they should speak up more.


My condolences:)


> How on earth would that make more sense than properly setting up ACME and forgetting about the problem for the next hundred years?? If your bespoke ERP system is really so hostile toward cert changes, put it behind a proper reverse proxy with modern TLS features and self-sign a certificate for a hundred years, and be done with it.

I completely agree with you, but you would be astonished by how many companies, even small/medium companies that use recent technologies and are otherwise pretty lean, still think that restarting/redeploying/renewing as little as possible is the best way to go instead of fixing the root issue that makes restarting/redeploying/renewing a pain in the ass.


I don't know about OP but I've also worked plenty of places where I seem to be the only person who understands TLS.

And not even at the "math" level. I mean, like, how to get them into a Java keystore. Or how to get Apache or nginx to use them. That you need to include the intermediate certificate. How to get multiple SANs instead of a wildcard certificate. How to use certbot (with HTTP requests or DNS verification). How to get your client to trust a custom CA. How to troubleshoot what's wrong from a client.

I think the most rational takeaway is just that it's too difficult for a typical IT guy to understand, and most SMBs that aren't in tech don't have anyone more knowledgeable on staff.
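For the curious, here's roughly what a couple of those tasks look like in practice: a multi-SAN cert via certbot's HTTP-01 webroot flow, then imported into a Java keystore (domains, paths, and passwords here are placeholders, and the webroot has to match your server config):

    # one certificate covering several SANs instead of a wildcard
    certbot certonly --webroot -w /var/www/html \
      -d example.com -d www.example.com -d api.example.com

    # convert the resulting cert and key to PKCS#12, then import into a Java keystore
    openssl pkcs12 -export -in fullchain.pem -inkey privkey.pem \
      -out site.p12 -name site -passout pass:changeit
    keytool -importkeystore -srckeystore site.p12 -srcstoretype PKCS12 \
      -destkeystore site.jks -srcstorepass changeit -deststorepass changeit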


> I think the most rational takeaway is just that it's too difficult for a typical IT guy to understand, and most SMBs that aren't in tech don't have anyone more knowledgeable on staff.

Where would that kind of thinking lead us..? Most medical procedures are too complex for someone untrained to understand. Does that mean clinics should just not offer those procedures anymore, or should they rather make sure to train their physicians appropriately so they’re able to… do their job properly?


Well I mean there's no inherent requirement that PKI work the way it does. We've mostly just accepted it because it's good enough.

Even if your server admins fully understand TLS, there are still issues like clock skew on clients breaking things, old cipher suites needing to be reviewed / sunset, users clicking past certificate warnings despite training, and the list of (sometimes questionable) globally trusted CAs that the security of the Internet depends upon.

Of course they should do their job properly, but I'm skeptical that we (as software developers) can't come up with something that can more reliably work well.


Yeah I have one specific enterprise app (the updater service for another piece of software) that will not work unless TLS 1.1 is turned on at the OS level. It doesn't do anything with it, but some hard coded phone-home function in the software must fire up each time it checks for updates (even though it doesn't use TLS for the connection, but unencrypted FTP) or it will hard fail and not even log the failure.


I have to schedule at least 30 days out on any change or restart for main systems and I may be overruled by ANY manager.

I actually watch for crashes (thank you, inventory control department shenanigans) so that I can sneak in changes during a reset.


> […] that restarting/redeploying/renewing as little as possible is the best way to go instead of fixing the root issue that makes restarting/redeploying/renewing a pain in the ass.

I mean… There's a tradeoff to be sure. I also have a list of things that could be solved properly, but can't justify the time expense of doing so compared to repeating the shortcut every so often.

It's like that expensive espresso machine I've been drooling over for years: I can buy a lot of great coffee at a barista shop before the machine would have paid for itself.

But in this particular instance, sure; once you factor the operational risk in, proper automation often is a no-brainer.


Yep this. This is just "we have so much technical debt, our square pegs should fit into all round holes!"

Business culture devaluing security is the root of this, and I hope people see the above example as everything that's wrong with how some technology companies operate; "just throw money at the problem because security is an annoying cost center" is super bad leadership. I'm going to guess this guy also has an MFA exception on his account and a 7-character password because "it just works! It just makes sense, nerds!" I've worked with these kinds of execs all my career and they are absolutely the problem here.


IT serves business needs... not the other way around. If anything, cloud services and mobile device access have made securing anything just about impossible.


Classic case of business not understanding that it doesn't just need access to the data, it needs secure access to the data.


That's what insurance and lawyers are for... so sayeth my management that pay me.


Start your own business - nginx proxy in front of ERP where you handle the SSL for them, put $$ in a trust to ensure there's enough money to pay for someone to update the cert.


>Why isn't it instead just a minute? or a few seconds? Wouldn't that be better?

Then if your CA went down for an hour, you would go down too. With 47 days, there's plenty of time for the CA to fix the outage and issue you a new cert before your current one expires.


Lots of ACME software supports configuring CA fallbacks, so even if a CA is down hard for an extended period you can issue certificates with the others.

Using LetsEncrypt and ZeroSSL together is a popular approach. If you need a stronger guarantee of uptime, reach for the paid options.

https://github.com/acmesh-official/acme.sh?tab=readme-ov-fil...
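Whether the fallback is built into your client or just scripted around it, the shape is the same; a crude sketch with acme.sh's --server flag (hypothetical domain and webroot):

    # try Let's Encrypt first, fall back to ZeroSSL if issuance fails
    acme.sh --issue -d example.com -w /var/www/html --server letsencrypt \
      || acme.sh --issue -d example.com -w /var/www/html --server zerossl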


If everyone uses that with 1 minute or 1 second expirations, I could certainly see a case where an outage in 1 CA causes traffic migration to another, causing performance issues on the fallback CA too.

>If you need a stronger guarantee of uptime, reach for the paid options.

We don't. If we had 1 minute or 1 second lifetimes, we would.


Oh, agreed. I was responding to the part about extended outages.


The exact time probably has no "best", but from past experience: I have seen so many places where multi-year certificates were used and then forgotten about, until some service suddenly stopped working and people had to figure out how to replace that cert.

A short cycle ensures either automation or keeping memory fresh.

Automation can of course also be forgotten and break, but at least it's written down somewhere in some form (code), rather than living in the personal memory of a long-gone employee who used to manually upload certs to some CA website for signing.
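Even the "written down" part can be tiny; for certbot-managed certs it's often just a cron entry or systemd timer someone can later read (a sketch; the reload hook depends on what's actually serving TLS):

    # renew anything close to expiry every night at 03:17, then reload nginx
    17 3 * * * certbot renew --quiet --deploy-hook "systemctl reload nginx"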


Sure, there is an argument about slippery slopes here. But the thing about the adage of "if you slowly boil a frog..." (https://en.wikipedia.org/wiki/Boiling_frog) is that not only is the biological metaphor completely false, it also ignores the fact that there can be real thresholds that can change behavior.

Imagine you run an old-school media company who's come into possession of a beloved website with decades of user-generated and reporter-generated content. Content that puts the "this is someone's legacy" in "legacy content." You get some incremental ad revenue, and you're like "if all I have to do is have my outsourced IT team do this renewal thing once a year, it's free money I guess."

But now, you have to pay that team to do a human-in-the-loop task monthly for every site you operate, which now makes the cost no longer de minimis? Or, fully modernize your systems? But since that legacy site uses a different stack, they're saying it's an entirely separate project, which they'll happily quote you with far more zeroes than your ads are generating?

All of a sudden, something that was infrequent maintenance becomes a measurable job. Even a fully rational executive sees their incentives switch - and that doesn't count the ones who were waiting for an excuse to kill their predecessors' projects. We start seeing more and more sites go offline.

We should endeavor not to break the internet. That's not "don't break the internet, conditional on fully rational actors who magically don't have legacy systems." It's "don't break the internet."


Pretty much any legacy system can have a modern reverse proxy in front of it. If the legacy application can't handle certs sanely, use the reverse proxy for terminating TLS.


"Just use Nginx" was not a viable option here, without additional Certbot etc. orchestration, until 14 days ago! And this is still in preview! https://blog.nginx.org/blog/native-support-for-acme-protocol

And, if you haven't been using a reverse proxy before, or for business/risk reasons don't want to use your main site's infrastructure to proxy the inherited site, and had been handling certificates in your host's cPanel with something like https://www.wpzoom.com/blog/add-ssl-to-wordpress/ - it is indeed a dedicated project to install a reverse proxy!


Every year is too infrequent to force automation, leading to admins forgetting to renew their certs. Every minute/day may be too demanding on ACME providers and clutters transparency logs. Dynamic certs just move the problem around, because whatever is signing those certs effectively becomes the SSL cert in practice, unless it happens over ACME, in which case see the point above.


I'm not sure if you're arguing in good faith, but assuming you are, it should be pretty self-evident why you wouldn't generate the certificate dynamically for each request: it would take too much time, and so every request would be substantially slower, probably as slow as using Tor, since you would need to ask for the certificate from a central authority. In general it's all about balance: 1 month isn't necessarily better than 1 year, but the reduced timeframe means there's less complexity in keeping some revocation list and passing it to clients, and it's not so short as to require more resources on both the issuer and the requester of the certificate.

> Perhaps it's time to go with another method entirely.

What method would you suggest here?


> since you would need to ask for the certificate from a central authority

Could it work that your long-term certificate (90 days, whatever) gives you the ability to sign ephemeral certificates (much like, e.g. LetsEncrypt signs your 90 day certificate)? That saves calling out to a central authority for each request.


Without knowing the technical details too much: Maybe, although I don’t think it would make much difference in my argument, since it would still add too much time to the request. Likely less, but still noticeable.


dunno why you're being so obnoxious about it?

a month is better than a year because we never ever ever managed to make revocation work, and so the only thing we can do is reduce the length of certs so that stolen or fraudulently obtained certs can be used for less time.


On the vulnerability ladder since SSL was introduced, how common and how disastrous have stolen or fraudulent certs really been compared to other security problems, and by how much will these changes reduce such disasters?


China currently has a large APT campaign using a compromised CA (Billbug).

https://www.darkreading.com/endpoint-security/china-based-bi...


I agree with the article, this is "potentially very dangerous". Potential is not actual though, and I'm asking about what damage has actually materialized. Is there a cost estimate over the past 20 years vs. say, memory safety vulnerabilities?


Is this some sort of troll comment?

I'm sure that you are perfectly able to do your own research, why are you trying to push that work onto some stranger on the internet?


Is this a troll article? The article asked basically the same question:

    I also wonder how many organizations have had certificates mis-issued due to BGP hijacking. Yes, this will improve the warm fuzzy security feeling we all want at night, but how much actual risk is this requirement mitigating?
Scope creep with diminishing returns happens everywhere.


There was an attempt at doing it differently with CRLs, but it turns out certificate revocation is not feasible in practice at web scale.

Now they are doing the next plausible solution. It seems like 47 days is something they arrived at from Let's Encrypt's experience, estimating load from current renewals, but that last part I am just imagining.


CRL distribution at web scale is now possible thanks to work by John Schanck at Mozilla https://hacks.mozilla.org/2025/08/crlite-fast-private-and-co...

But CRL sizes are also partly controlled by expiry time: shorter lifetimes produce smaller CRLs.


Oh wow, that's really fresh. I was still stuck on cascading Bloom filters.


> Why wouldn't you go with a week or a day? isn't that better than a whole month?

There is in fact work on making this an option: https://letsencrypt.org/2025/02/20/first-short-lived-cert-is...

> Why isn't it instead just a minute? or a few seconds? Wouldn't that be better?

> Why not have certificates dynamically generated constantly and have it so every single request is serviced by a new one and then destroyed after the session is over?

Eventually the overhead actually does start to matter

> Maybe the problem isn't that certificates expire too soon, maybe the problem is that humans are lazy. Perhaps it's time to go with another method entirely.

Like what?


As the limit approaches zero you re-invent Kerberos.


Hey, maybe you can have a better hiring practice than Datadog, with a 5-question test where, if you get a single answer wrong in even the smallest of ways, you get disqualified from getting a job with them for 6 months.

I'm guessing they lost a wealth of great talent due to this test, which covers how to support a platform they give to fresh-off-the-street applicants without even a modicum of training about their product. They want you to study it for free, probably as a marketing tactic, but also so they don't have to pay to train employees. It's great, like cancer.

Disclaimer: I have never applied to a role with Datadog, nor interviewed with them. I've just had multiple friends complete the process with mixed results. It seems like you need to put in roughly two full weeks of self-directed study to pass their on-site interview 'exam', where they don't tell you the exam is 100%-or-fail (but it is!).


Based on the link it sounds like they want to carry on with what they've always done but get more money. I get the desire but I doubt the content is going to go back to what it was 25 years ago.

The way I see it today, ideological reviews are what is mainstream. It's not about how good the game is, it's how inclusive it is to minorities that reaches the front page. It's fucking bizarre, but it makes sense when the goal is to reach as many people as possible. On both sides of politics it's just rage bait, depending on which way the needle swings. Rage bait creates the clicks that bring the mass appeal, not just "nerdy gamers."

I'm a nerdy gamer. I just want to know how good of a game stuff is. When you go to actual niche forums you find real information without the bullshit. You aren't going to find that at all on mainstream mass media sites like the Verge, Motherboard/Vice, Kotaku, et al. I think the goal of aftermath is to just create another mainstream outlet. I hope they prove me wrong.


I'd rather watch some gameplay footage with no commentary. I don't find that a person reviewing a video game contributes anything meaningful to the decision of whether or not I want a game, unless I know them personally and we have similar taste.

I have a suspicion that the only way such content has any value today, in a world with quick and easy access to online video, is that reviewers are given copies prior to the release of the game. They get clicks because of people's impatience to learn more, despite being able to see an effectively infinite amount of content as soon as a game is officially released.

But the problem here is that game reviewers' livelihoods are entirely dependent on getting early access, and if they don't speak highly of the game then that livelihood can be cut off. They are basically all bribed into giving positive feedback. I think it is actually worse than nothing at all. It is actively harmful to finding out what a game is actually like.

I guess my point is that we should just wait until a game is released. There are infinite copies and there exists more games than anyone could ever play. Reading some heavily biased opinions about a video game a week before it comes out is not meaningful.


> I'd rather watch some gameplay footage with no commentary. I don't find that a person reviewing a video game contributes anything meaningful to the decision of whether or not I want a game, unless I know them personally and we have similar taste.

Are you really willing to watch 10+ hours of footage, just to figure out the worth of a game? Because that's the actual worth of a review, telling you not just about the game itself, but also how well it continues after the initial tutorial-phase.


I am willing to take that risk. It's not really a big concern to begin with.

I'd consider someone's review if they bought the game themselves, they don't generate any revenue from it, and we have had similar taste in the past. Even the most brief word-of-mouth praise of something from a friend is going to be more valuable than any length of review from a biased source. It might also be possible to find a collection of random people on Steam to follow just for their reviews. I've never tried this, but I think it would abide by my guidelines of trustworthiness.


I don't think that is very accurate. In my experience the most popular reviews are personality-driven, largely in the YouTube space, and the next most popular written sites are also fairly neutral when it comes to their reviews (Gamespot, IGN, Gameinformer, Eurogamer). Honestly, people just want two things: to hear their favorite content creator talk about the game, or to have someone say how great it is. But in my opinion the best way to sort out all the noise is to just find a content creator you share tastes with, since IGN et al have such a massive cast of reviewers constantly shifting around that it doesn't make sense to rely on that.

You could also use a review aggregator if you just want to see what the overall consensus is: https://opencritic.com/

My other piece of advice, which has worked very well for me as a PC gamer, is to look up Steam reviews; there are a LOT of underappreciated games that were poorly received by critics but that players actually enjoyed. It is also review-bomb resistant since you actually need to buy the game, and it will let you filter by time period and notify you if there has been a noticeable change in review patterns.


I guess modern gaming journalists are journalists first, gamers second. It used to be the other way around.

I miss the sh*t out of printed specialist journalism. Growing up, we had a couple of printed magazines. You could tell they were incredibly well written, with great reviews, walkthroughs, interviews with people in the local industry, as well as surprisingly in-depth exposés about things like, for example, how CG animations are made.

Unfortunately, thanks to a leaky water pipe, I can only cherish the memory of said collection.

Their business model was exchanging product (the magazine) for money. Based on that business model, they could pay their employees, a couple of passionate and talented people, a living wage, something that I'm not sure modern internet-based outfits accomplish.


> ideological reviews are what is mainstream. It's not about how good the game is, it's how inclusive it is to minorities

This seems like a pretty heavy exaggeration to me. I just spotchecked several reviews on IGN, PCGamer, and GamesRadar (top 3 sites I get when googling "games reviews"), and found nothing of the sort. I'm sure if you go digging you could find that content somewhere on the site, but to claim it's the crux of their review model is just false.

For example, the new MW3 is a game where there's lots you could comment on in terms of the ideology of its portrayals of the war on terror and "ends justify the means"... but not one of the mainstream reviews I can see spends time on it, other than to mention the villain's vague motives. Or on the flip side, take Fae Farm, a game that explicitly advertises itself as inclusive - only one paragraph of GamesRadar's nine-page review is on the topic of inclusivity, and it doesn't go any ideologically deeper than "there's a lot of customization here, including androgynous options and things like dreadlocks and turbans". The entire rest of the review is about gameplay and graphics.


Yeah, agreed.

The first name in that list is a guy who gave great reviews of a game by a developer who was sleeping with him.

Of course, due to one (or both? I forget which) of them being married, that fact had to be a secret.

In any other publication, the author's sexual liaisons with the subject of the piece would be considered a clear conflict of interest. Not so in gaming, it appeared.


What, there are still people that believe this? That Nathan Grayson never wrote reviews of any game by Zoe Quinn is easily verifiable; you can just go look at the webpage. The claim has had zero veracity for years, that it's false is widely known and documented, and there are multiple Wikipedia articles discussing how and why it's wrong.

I don't think I've ever read an article on Kotaku, I never read any games news, and I barely even play games anymore other than occasional Doom Eternal to blow off steam. But I know Grayson and Quinn's names because of how widely and loudly this argument was disproved.

I'm honestly kind of amazed to see someone still claiming this. Like I'm back editing more stuff into this because my brain is so confused and still thinking about it.


Review? No. But possibly "positive coverage"-> https://archive.ph/LXLli

The top screenshot comes from DQ, and the post lists it as a "standout".


I hate that I spent the time to check, but this was posted a few months before Grayson and Quinn were together. Even if it weren't, it's really not much of a smoking gun.


Wow! I thought you were joking but the first name is the dude at the center of what started “gamergate” and everything that followed it.

Thanks for pointing that out.

This site is just another in a long list of game journalism sites that is worthless.


How are there still people that believe this? I seriously thought the claims about Grayson were so widely discredited that even the people still saying gamergate is about "ethics in journalism" stopped making them because it was so embarrassing. Like it's obviously not true, go look at the Kotaku website, he never wrote any reviews about Quinn's games, it takes like 30 seconds to check.


No, they didn't stop, actually. Even here:

https://news.ycombinator.com/item?id=38154072


I thought a deranged online mob started Gamergate.


I recall a lot of it was driven by Kotaku, as well as the whole gang of cancel-culture twitter journalists.

It may be wrong, or right, but that is the narrative in my head around this. So it's probably the same in a lot of other disgruntled people's minds.


Kotaku had the wrong author write a review of a niche video game that nobody cared about. Gamergate was started by people who thought something totally unimportant like that was reason to send death threats to people.


Could you post a link to this review? I would like to verify this information.


I for one don't understand how sustainability and a yearly release schedule can go hand in hand. Why isn't it a 2 or 3+ year release cycle? Remove the need to upgrade, make spare parts and reuse and recycling more common. Maximize lifespan of devices.

Yearly upgrades of consumer electronics is a pretty non-sustainable idea. Imagine if every year everybody tossed their television, monitor, computer, phone, tablet, headphones, speakers and all their other smart devices into a landfill and bought new ones. Now realize this actually happens with phones in some cases and batteries in almost all cases.


Just change your perspective to one of a scumbag C-level executive of a public facing company and you will quickly understand.

> I for one don't understand how sustainability and a yearly release schedule can go hand in hand

The sustainability report is mostly greenwashing, a thinly veiled deceptive tactic to hide their lust for greed.

> Why isn’t it a 2 or 3+ year release cycle?

Again, adjust your perspective. The point is to push hardware sales and pump the quarterly numbers. Making the phones easily repairable means a significant decrease in NEW phone sales, which generate the $$$. Let's be honest, Apple C-level execs don't give a fuck about the environment, human rights, or any of that. It's all a show.

> Yearly upgrades of consumer electronics is a pretty non-sustainable idea

Apple C-level execs know this. Apple marketing division knows this. Consumers know this. Yet people continue to buy their greenwashing campaign every year and consumers are convinced it’s okay. Oh it’s “carbon neutral” now. Oh Apple installs solar panels at their shitty office, “I am buying into a green company guyzzz!!! save the planet one iPhone at a time”

Support right to repair. Support government regulations. Do not expect these private companies to “do the right thing”


Just because they release a new phone each year doesn't mean you have to buy it.


They offer a (fairly generous by industry standards) trade-in program. They take thousands and thousands of perfectly working iPhones, then turn them into raw scrap, just so they aren't floating around on the secondary market. Apple realises that most iPhone users aren't going to switch to Android, so reducing the supply of secondary-market iPhones really helps them sell more new devices.


They refurbish and sell the traded-in devices. It's basically like selling the same thing twice.


Yeah, they refurbish some of them.

They also scrap a whole bunch of them too[1]. While undoubtedly many of the shredded devices are beyond repair, many of the devices destroyed in the video have no obvious physical damage.

I don't think they release any numbers on how many traded-in devices are refurbished versus destroyed.

[1]: https://www.youtube.com/watch?v=fUXiYecGZs8


New iPhones are announced yearly, but the average smartphone consumer keeps their device for between 2 and 3 years.[0] iPhones in particular have extended lifecycles; an informal 9to5 Mac poll in 2021 (biased towards enthusiasts) had roughly 4 in 5 people waiting at least 2 years, with almost half of polled users 3 years or more.[1]

So what's actually happening is that the yearly iPhone rush is only a small fraction of the install base upgrading. Sure, there are a few uber-enthusiasts who may upgrade every year, but they are a minority, and it's not like those phones go straight to landfills; they're resold. And since there are more opportunities to upgrade, fewer people attempt to upgrade simultaneously, which would otherwise strain supply chains and make Apple's income fluctuate more heavily.

[0]: https://www.statista.com/statistics/619788/average-smartphon... [1]: https://9to5mac.com/2021/04/18/poll-how-often-do-you-upgrade...


I agree, and I think a longer release cycle of, say, 3 years could actually have the exact opposite effect, cutting the average lifespan of iPhones.

The annual release cycle doesn't force people's hands. You can have a look to see if there's anything compelling this year and if not, just put off your purchase for another year. No big deal.

If the choice was between 3 and 6 years, most people would probably get a new phone almost automatically after 3 years rather than facing the prospect of sticking it out with a very obsolete phone.

A two year cycle would probably have the same effect, only less pronounced.

Also, I think less frequent releases would come with a far bigger marketing push and some actual innovation. This year it's "A16 Bionic for powerful, proven performance".


Anecdata: my current iPhone is 4 years old, and the previous one is in my kid's hands (7 years old at this point, and going to be replaced by my current one because lack of future updates on that one).


> Remove the need to upgrade

There's no need to upgrade yearly. They can release whenever they want, but I'm only upgrading every few years at most.


Yeah Apple has no leg to stand on in that regard. Making iOS slower for older phones instead of debloating iOS is anti-sustainable.


The iPhone XS just got a new OS version. The Pixel 3 from the same year doesn't even get security updates anymore.

At the end of the day, despite what the LineageOS flashers will insist, laypeople want updated features. Give them updated features for their old phones and they will keep them longer. Keeping them longer is the most you can do when throwaway tech is the norm.

If you need empirical evidence, 5 seconds looking at any market for used phones reflects that.


Money?


I take it you're exclusively an apple iphone/airpod pro interpersonal relationship person.

No kids with android. No wife, husband, whatever with android. No friends with android. Just that trusty ole iphone and airpods.

Plus no iPad, no laptops with USB-C chargers, no battery banks with USB-C inputs/outputs, no other ereaders with usb-c, no USB-C desk fan, no USB-C headphones...

Must be interesting.


Like they said, they keep a Lightning cable in their car. They didn't say that it's a USB-A to Lightning cable. I own USB-C power banks and so forth, and use a USB-C to Lightning cable to plug my iPhone into them.

> no USB-C desk fan

Are you, uh, powering a desk fan off your phone?


In a pinch you grab what you can, and if what you need is USB-C, you have a lot more options.


>Are you, uh, powering a desk fan off your phone?

No, but I power it with a USB-C cable.

And with a phone having USB-C I can just use the same to power my phone when I want. Or power my phone from a powerbank without another cable specially made for this brand of phone.


> take it you're exclusively an apple iphone/airpod pro interpersonal relationship person.

He could just be single...


Your fed link says it is an estimate.

I don't know where this guy gets data but I don't think the availability is that far off from the peak. https://twitter.com/Econimica/status/1698596620022186428

You're talking less than 2 percentage points with the fed estimate anyway right?


>[MPL's] only limitation is that if you modify the source code of Terraform itself, you have to release those modifications under the same MPL license.

Is it possible that past releases of Terraform being under the MPL means the creator cannot release it under a commercial license? I.e., by using the license, you can't escape it for future versions and make them commercial, or does that not apply to the creator/owner?

I know what I'm hoping for is a pipe dream, but once you start an open source project under an open source license, it really shouldn't be possible to make it closed source. Open source means anyone could contribute, and profiting on someone else's contributions feels very, very wrong.


The copyright holder can always switch to any license they want. Difficulties only emerge if there are multiple copyright holders because contributors weren't required to assign copyright. Workarounds include rewriting or removing the code owned by someone else.

Previously released versions cannot have their licenses rescinded unless there is a clause allowing it.


FTFA: >The $90 million[2] construction of the Atlanta Public Safety Training Center began in spring 2023 on a 85-acre plot of land in the South River Forest, DeKalb County, Georgia....

>...Once complete, the center is planned to be used as a training center for police and fire services and is expected to open at the end of 2023....

>265 surrounding acres of the site will be used as green-space.[6]

265 acres of green space sounds good?


It's a forest now so cops-only green space is a downgrade in a couple ways.

