The only thing a blockchain is good for is achieving decentralized consensus on what value a key points to, which is what DNS is.
An alternative way of looking at this is that acquiring domains must be somewhat expensive by definition; either you enforce that cost at the system level, or you make registration free, but then somebody will inevitably grab all the interesting names and re-sell them to others. Enforcing that cost without a central authority requires decentralized financial infrastructure, and a blockchain is the only viable way to build that.
GNS is the obvious response here, in addition to the various blockchain-based solutions. Nothing that enjoys widespread support or mindshare, unfortunately.
Even the current centralized ICANN flavor could be substantially more resilient if it instead handed out key fingerprints and semi-permanent addresses when queried. That way it would only ever need to be used as a fallback when the previously queried information failed to resolve.
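A minimal sketch of that fallback model, assuming a registry that hands out an (address, key fingerprint) pair. The class and field names are hypothetical, and the verification step is a placeholder, not a real DNS extension:

```python
class PinningResolver:
    """Treats the central registry as a fallback: once a name has been
    resolved, the pinned (address, fingerprint) pair is reused until it
    stops working, so the registry is only consulted on failure."""

    def __init__(self, registry):
        self.registry = registry   # authoritative source, rarely consulted
        self.cache = {}            # name -> (address, key_fingerprint)

    def resolve(self, name):
        if name in self.cache:
            address, fingerprint = self.cache[name]
            if self._reachable_and_key_matches(address, fingerprint):
                return address     # no registry round trip needed
        # Fallback: ask the registry for a fresh pair and pin it.
        address, fingerprint = self.registry[name]
        self.cache[name] = (address, fingerprint)
        return address

    def _reachable_and_key_matches(self, address, fingerprint):
        # Placeholder: in reality, connect to the address and compare the
        # peer's public-key hash against the pinned fingerprint.
        return True
```

Under this scheme a later change (or compromise) of the registry entry doesn't affect clients whose pinned address still resolves and presents the expected key.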
GP said it was a risk (and it is), not that there are better alternatives. Not all risks can be eliminated easily but you should still be aware of them.
BGP, but the names in question are limited to 128 bits, of which at most 48 will be looked up, and you don't get to choose which 48 bits are assigned to you.
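For illustration, the point that only the top 48 of the 128 bits participate in the lookup can be shown with the stdlib `ipaddress` module (the address is a documentation example, and /48 is used as the longest globally routed prefix):

```python
import ipaddress

addr = ipaddress.IPv6Address("2001:db8:abcd:1234:5678:9abc:def0:1")

# Zero the low 80 bits: only the first 48 bits survive into the
# routing lookup, however the remaining bits were assigned to you.
routed = ipaddress.IPv6Network((int(addr) >> 80 << 80, 48))
print(routed)  # 2001:db8:abcd::/48
```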
I don’t know why you were modded down because this is mostly true. They are still prohibited from operating in the US but it appears that regulators have no appetite to enforce the law.
Yes, the software that piles up literally is the tech debt. Every automation and tool that was vibe-coded has to be maintained as well. If software is 100x easier to write and you write 100x as much of it, then taking into account network effects, your tech debt is now 100x worse. Congrats!
It’s annoying for the team members I suppose, but to be fair, if you’re working on a high-profile open source project, owned by one of the most hyped companies in the world, and your branches are public, it’s probably a good idea to be clear in the branch naming and supplemental files if you’re just “experimenting”.
By working in public on a popular open source project, you are communicating intent and purpose to your users and the general public through your commit messages, branch names, and documentation. You’ll save yourself a lot of grief if you act accordingly.
The rain doesn’t happen directly above where the water evaporates. And “slightly warmer” wastewater can have major ecological impacts, destroying native life in the lakes and rivers where it is discharged. Plus, if the water is drawn from underground aquifers that aren’t refilling fast enough, or taken from downstream users, that’s something to be concerned about.
So tired of these articles. Yes, it’s possible for them to use very little water. But naive comparisons to non-potable agricultural or other irrigation use or comparisons that don’t take into account growth rates of specific uses or local bottlenecks are useless.
You're mistaken - ghost is not a service consuming actions for itself - it's a CLI tool you run locally to drive workflows with sane default configs, so you can easily drop into them and continue working or debugging in reliable and consistent infra, or have your agent do it. It is a better CLI for GH workflows (https://news.ycombinator.com/item?id=47982915), not whatever you were imagining.
The reporter made a website explicitly calling out Ubuntu, RedHat, Amazon, and SUSE but didn’t notify them, and you think that’s reasonable? That they might not have known those distributions are downstream from the kernel team?
yes, because 30 days had passed from the time the patch landed in the kernel, as per industry standard.
approximately every security researcher, including the likes of google and other big names you may know, does a 90+30 disclosure, which is what happened here. they do this for good reason, which has been figured out over decades of experience in reporting thousands and thousands of vulnerabilities.
the only security researchers i know of that don't like 90+30 actually argue for shorter timelines (or immediate disclosures).
What do you think went differently in this case versus other high profile vulnerabilities that had binaries already available for major distros? I feel like it often (usually?) works out that major distros have kernel packages incorporating the fixes already available.
Is this just down to luck, a quirk in the timing about when Linus merged the fix versus when the release gets cut?
What is the heuristic for who should get the heads up? Should they notify amazon but not google simply because they named amazon linux in the report? Seems to me the answer to my first question gets messy fast.
I think it’s reasonable to expect folks in the security community who go to the trouble of creating a website detailing security vulnerabilities in specific listed software to pre-notify the security teams of that software. The CopyFail website calls out Ubuntu and Red Hat specifically, but apparently the author of the site did not inform them of the issue?
But even if you think making unethical decisions out of personal self-interest is something no one should be criticized for, surely the Linux kernel team ought to have some process for notifying the top distributions of an upcoming LPE, just out of practicality.
In what sense do you believe that the reporter did not notify the security team of the relevant software? The vulnerability is in the kernel. Reporter responsibly disclosed using the kernel’s security report mechanism and waited until a patch was ready.
Distros are downstream of kernel, that doesn’t entitle them to expect to be contacted directly by every security reporter. That’s not on them. Distros that are big enough should be plugged into the linux security team for notifications.
Security researchers cannot be held responsible for broken lines of communication within the org charts of projects that they study. They’re providing a valuable public service already, how much more do you want?
It is suggested that they, out of an abundance of caution, send 5 or 6 more emails. If that is entirely too much to expect, we can always help them by mandating that they spend 6 figures annually meeting a much more robust set of requirements, including notifying all possibly affected parties, down to the Hannah Montana Linux devs if any still exist.
Any strategy that assumes the rest of the world is functional, or that makes you personally responsible for fixing all of it, is equally broken; but there is a reasonable middle ground, and sending a few more emails lies within it.
> we can always help them by mandating that they spend 6 figures
Who’s we? Mandate with what authority?
AWS and GCP are downstream another level. Should the reporter also have worked with them? And their customers? And the customers of their customers?
IMO this whole discussion seems like people are annoyed by the security researchers doing god’s work and wish they didn’t exist or think that they should be fully subservient to the projects and companies they are helping for free. The bugs were there before the researchers revealed them!!