It's saying "modern, maintained", implying that ls is either not maintained (wrong) or not modern (which can be argued to be true). Only one of those two properties needs to hold for the entire label to fit.
I strongly suspect the title could be translated into numerous languages verbatim, without losing the unintended interpretation. It is so for a few languages I know.
You can try it with translation tools.
The problem is semantic: in any language whatsoever (I suspect) if we express the idea that X is a replacement for Y, and in the same sentence mention some attributes of X, it means that those attributes are relevant to qualifying X as a replacement, which implies that those attributes are lacking or inadequately present in Y.
Without heaps of prior context, it is an impossible interpretation that the X attributes are not actually lacking in Y, but in a previously attempted replacement Z.
Absolutely nothing in the confusion here is specific to English.
I don't know why you would try to take this opportunity to criticize English when this misunderstanding could be present in literally every other language.
Because there is nothing whatsoever here that is a case of linguistic confusion or vagueness -- it is a conceptual issue of comparing two items, applying an adjective to one, and leaving the reader to wonder what that implies about the other item.
And no, English is not a "dumpster fire". Every language has its pros and cons. But there is no language on Earth that is a "dumpster fire". There is absolutely nothing productive or good that can come out of blanket, utterly unfounded statements like that.
The level of effort and obviousness of an email reset is nothing compared to helping someone figure out how to reconfigure every smart device ever made.
So it's a bad use case for a password, then. Perhaps every router should ship with a preconfigured VLAN for shitty smart home stuff that is a lot more open, or maybe we should stop trying to stick internet into everything ever created.
Why should it be just the IoT devices that get the insecure network? Why not just stop trusting the LAN altogether and instead use technologies like HTTPS and DoH to ensure privacy on the important devices? That seems to be the way the tide is turning anyway.
Personally I'm all for that, but people & packages seem to be pretty promiscuous about listen address defaults, assuming everything behind a router's NAT is trusted.
Treating the network as untrusted is good, but as long as people are paying for service, traffic, and bandwidth, there are reasons not to let just anything use your network. And there is also the legal question of liability if someone does something not quite above board from your IP.
Right, good point. There is of course the option to see saved wifi passwords on most devices... but I can see how an engineer decided to bypass all this bikeshedding and just send the damn password haha.
For Verizon owned routers? For company owned and supported equipment, I can understand it. I might not like it, but I can understand it. Especially if they are on the hook for support.
But, that’s why I run my own router for internet access. It’s my router and I can control what it does. If it goes down, then that’s on me. And I’m okay with that. Would I necessarily want the same setup for my parents? Probably not…
I'm not concerned with this question, as it implies that people haven't got a choice between "rent modem, ez for noobs" and "buy own equipment, fully control it." They do still have that choice (presumably thanks to some leftover regulation, from back when the US did that, in the case of cable companies), but I have zero problem with the ISP making those tradeoffs. The people who would trust the ISP-owned device have likely already typed that wi-fi password into things like $99 smart TVs, which probably transmit their wifi password, location, and microphone data directly to China. Verizon having the wifi password is not cause for concern here.
Those who are security conscious enough to have concerns about their LAN security do not buy "internet + routers + desktop support as a service" by renting the endpoint equipment -- they buy just the internet connectivity and furnish equipment they can control and trust.
> I'm not concerned with this question as it implies that people haven't got a choice between "rent modem, ez for noobs" and "buy own equipment, fully control it."
If you buy the equipment from Verizon, I will bet you a significant amount of money that it still sends your passwords to them [on edit: with exactly zero disclosure that's detectable to 99.99 percent of users]. In fact, I'll bet you Verizon treats customer-owned equipment exactly like rented equipment except in billing. But anyway.
> The people who would trust the ISP-owned device likely have already typed that wi-fi password into things like $99 smart TVs which probably transmit their wifi password, location, and microphone data directly to China. Verizon having the wifi password is not cause for concern here.
You park your car in bad neighborhoods. Had I not stolen your car, somebody else would have done it.
OK, I forgot we're talking about FiOS here. For sure that is slightly weirder than DOCSIS (which is all I've ever known personally). Since it's not really a standard like DOCSIS you probably "must have" some piece of Verizon-proprietary gear whether rented or otherwise and I'm sure Verizon remote-manages those in the same basic ways like you said. But I am pretty sure that still, security-conscious or advanced users can disable the Verizon device's WiFi and drop it into bridge mode and provide their own router and APs. To me this provides a way to opt out of this that is well within the capabilities of anyone sophisticated enough to understand the risks.
It is not the overhead of the TLS bit but the chain of trust. Putting a caching proxy in the middle of the system breaks that trust chain. You can proxy TLS traffic, but it is tricky to do while maintaining that chain of trust. A proxy would be a classic way to mitigate (not eliminate) that sort of issue, either on the serving side or the client side: basically, it lowers the data being returned, but not the calls from the clients themselves (which is the DDoS concern). TLS just makes proxying harder.
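To make that concrete, here is a minimal sketch in Python of a caching proxy that terminates TLS itself; the origin URL, certificate files, and cache lifetime are all placeholder assumptions, and a real setup would use something like nginx, Varnish, or Squid rather than hand-rolled code:

    # Minimal sketch of a caching, TLS-terminating proxy (illustrative only).
    # ORIGIN, the cert/key files, and TTL are assumptions, not real values.
    import ssl, time
    from http.server import HTTPServer, BaseHTTPRequestHandler
    from urllib.request import urlopen

    ORIGIN = "https://origin.example"   # hypothetical upstream server
    CACHE = {}                          # path -> (expiry time, body)
    TTL = 300                           # keep cached responses for 5 minutes

    class CachingProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            entry = CACHE.get(self.path)
            if entry and entry[0] > time.time():
                body = entry[1]         # cache hit: the origin never sees this request
            else:
                # proxy-to-origin leg, a separate TLS session from the client's
                with urlopen(ORIGIN + self.path) as resp:
                    body = resp.read()
                CACHE[self.path] = (time.time() + TTL, body)
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    server = HTTPServer(("", 8443), CachingProxy)
    # The client-facing TLS session presents the proxy's certificate, not the origin's:
    # this is exactly the break in the chain of trust described above.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("proxy-cert.pem", "proxy-key.pem")
    server.socket = ctx.wrap_socket(server.socket, server_side=True)
    server.serve_forever()

The interesting part is the last few lines: clients now trust the proxy's certificate rather than the origin's, which is why proxying under TLS needs either control of both ends or a CDN that terminates TLS on your behalf.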
Cloudflare is either passing traffic through without touching it or, as a proxy, doing TLS termination, as they're a trusted CA in most devices/browsers/OSs/etc.
None of this really has anything to do with what's happening with OP.
It's not the 15k followers of ItsFOSS who are generating traffic; it is the servers of those 15k accounts' own followers, who see shares, boosts, or re-shares of the content. Given that Mastodon, and the larger Fediverse of which it is only a part (though a large share of total activity), are, well, federated (as TFA notes pointedly), sharing a link requires each instance to make its own request.
I'm not particularly clear on Fediverse and Mastodon internals, but my understanding is that an image preview request is only generated once per server, regardless of how many local members see the link. But, despite some technical work toward this, I don't believe there's yet a widely-implemented way of caching and forwarding such previews amongst instances (which raises its own issues of authenticity and possible hostile manipulation). (There's a long history of caching proxy systems, with Squid being among the best known and most venerable.) Otherwise, I understand that preview requests are now staggered and triggered on demand (when toots are viewed rather than when created), which should mitigate some, but not all, of the issue.
The phenomenon is known as a Mastodon Stampede, analogous to what was once called the Slashdot Effect.
There's at least one open GitHub issue, #4486, dating to 2017:
And jwz, whose love of HN knows no bounds, discusses it as well. Raw text link for the usual reasons, copy & paste to view without his usual love image (see: <https://news.ycombinator.com/item?id=13342590>).
> Have you considered switching to a Static Site Generator? Write your posts in Markdown, push the change to your Git repo, have GitHub/GitLab automatically republish your website upon push, and end up with a very cacheable website that can be served from any simple Nginx/Apache. In theory this scales a lot better than a CMS-driven website.
Admin's response:
> That would be too much of a hassle. A proper CMS allows us to focus on writing.
and i agree with them. static site generators are nice for people who like that kind of workflow, but when you have a team with different personal preferences you need something that is easy to access and doesn't require much learning to get used to.
what would be ideal is a CMS that separates content editing from serving it. in other words, a kind of static site generator that is built into a CMS and can push updates to the static site as they happen.
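a rough sketch of what i mean, in python (everything here is a placeholder: the webhook path, the choice of hugo as the generator, and the rsync destination):

    # minimal sketch, assuming the CMS can call a webhook whenever a post is published.
    # the endpoint path, the hugo build, and the rsync target are all placeholders.
    import subprocess
    from http.server import HTTPServer, BaseHTTPRequestHandler

    class PublishHook(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != "/cms-publish":   # hypothetical webhook path
                self.send_response(404)
                self.end_headers()
                return
            # rebuild the static site and push it to the web server's docroot
            subprocess.run(["hugo", "--destination", "public"], check=True)
            subprocess.run(["rsync", "-a", "public/", "web:/var/www/site/"], check=True)
            self.send_response(204)
            self.end_headers()

    HTTPServer(("", 8080), PublishHook).serve_forever()

editors keep the CMS workflow they like, and the thing actually serving pages only ever sees static files.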
It’s been a long time since I’ve been in the CMS space but I thought Wordpress had dozens of plugins for caching even 10 years ago.
I mean, even just hopping over to host your site on WordPress.com was a viable option if you were in that middle ground between a personal blog and having a dedicated server admin to handle your traffic.
Hard to believe that you’d be in the business of serving content in 2024 and have to deal with the slashdot effect from 1999 for your blog of articles and images.
WordPress does have bunches of caching plugins. Some VPS hosts (shout out SiteGround) also have multiple layers of server-level caching that they will apply automatically.
You can take most WordPress websites from multi-second load times down to 750ms or less. (As a regular exercise, I set up fresh WordPress installs on dirt-cheap VPS hosts and see how low I can get them while still having a good site; 250ms to display is not uncommon even without a CDN.)
true, but i suspect that wanting to self-host the content is a factor (at least it would be for me). and as others have mentioned, there seem to be issues with the cloudflare setup that should have helped here too, so even with self-hosting, it should be possible to handle this.
can you elaborate on this? my understanding was that a headless CMS means that it has no frontend at all, and you build your own using whatever web dev tools you like. in other words, it is used to integrate CMS functionality into my website without hosting the full website inside the CMS.
Maybe it's because it's deemed "too difficult" to change it?
Years ago, I was looking into some very popular C++ library and wanted to download the archive (a tar.gz or .zip, I can't remember). At that time, they hosted it on SourceForge for download.
I was looking for a checksum (MD5, SHA-1, or SHA-256) and found a mail in their mailing list archive where someone asked them to provide said checksums on their page.
The answer? It's too complicated to add checksum generation to the release process, and SourceForge is safe enough. (Paraphrased, but that was the gist of the answer.)
That said, for quite a few years now they have provided checksums for their source archives, but they kinda lost me with that answer back then.
> - why is there a random 'grey.webp' loading from this michelmeyer.whatever domain?
This got me wondering, and the reason is that they embed a "card" for a link to a similar blog post on michaelmeyer.com (and grey.webp is the image in the card). There's a little irony there, I think.
Just as a reminder: the site sees plenty of traffic from various other platforms. It's quite popular; I'm sure Mastodon is among the least of their traffic concerns. If the server can handle the load from plenty of other viral/popular sources, it is capable enough (even without proper caching).
Not every site configuration is perfect, and blaming the site's configuration while ignoring Mastodon's inherent issue is borderline impractical.
There have been other cases, such as where a mobile app developer has hardcoded an image from someone else's website into their app, then millions of users request it every time they open the app. Or where middlebox manufacturers have hardcoded an IP address to use for NTP.
Sure, having an efficient and well-cached infrastructure setup is good, but there's only so much you can do to "reduce server load" when other people in control of widely-deployed software have made choices that cause millions of devices around the world to hammer _you_ specifically.
The people who made those choices don't give a shit, it's not _their_ infrastructure they fucked over. That's why you need to shame them into fixing their botnet and/or block their botnet with extreme prejudice.
Mastodon's link preview service is a botnet, Mastodon knows it, and they refuse to fix it.
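If someone did want to block it at the application layer, a blunt sketch could look like this (Python WSGI middleware; the assumption that the preview fetchers announce themselves with "Mastodon" in the User-Agent is just that, an assumption to check against your own access logs):

    # Blunt sketch: refuse requests whose User-Agent looks like a Mastodon preview fetcher.
    # The "Mastodon" substring match is an assumption; verify it against real logs first.
    def block_fediverse_previews(app):
        def middleware(environ, start_response):
            ua = environ.get("HTTP_USER_AGENT", "")
            if "Mastodon" in ua:
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"Link previews are not served here.\n"]
            return app(environ, start_response)
        return middleware

In practice you would more likely do this in nginx or at the CDN edge, but the idea is the same.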
Each of those caches the preview/etc., so they hit it once in a while and that works for all their users. Each Mastodon server is independent and can't share that preview cache.
All the issues sound more like a them problem than a Mastodon or Cloudflare problem.
Something's wrong with their setup. Hard to tell, as HN has now brought them to their knees.
Cloudflare being unable to handle 100MB of traffic over 5 minutes sounds like ItsFOSS failed to set something up properly. The link preview is a fraction of the total page that would have to be loaded for each individual user, no matter where they came from.
ActivityPub itself could cache or display the content more intelligently, but something doesn't add up if Cloudflare can't handle that kind of traffic, let alone Hacker News traffic.
vbezhenar: They've misconfigured something and the page itself is marked max-age=0, which disables Cloudflare caching. I'm also betting nginx isn't caching either.
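The fix being hinted at is simply to send cache headers that a shared cache is allowed to honour. A minimal illustration (Python WSGI; the 60-second and 300-second lifetimes are arbitrary example values, not recommendations):

    # Illustrative only: the origin emits Cache-Control values that let a shared cache
    # (a CDN edge, or an nginx proxy_cache) serve repeat hits without touching the origin.
    def article_app(environ, start_response):
        body = b"<html>...article body...</html>"
        start_response("200 OK", [
            ("Content-Type", "text/html; charset=utf-8"),
            # max-age applies to browsers; s-maxage applies only to shared caches
            ("Cache-Control", "public, max-age=60, s-maxage=300"),
        ])
        return [body]

With max-age=0 (or private/no-store) in that header instead, the CDN has no choice but to pass every request through to the origin.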
Cloudflare has some other features like a Web Application Firewall (WAF) and bot protection/filtering (which seems like it would solve their problems too?)
From the article:
> Presently, we use Cloudflare as our CDN or WAF, as it is a widely adopted solution.
To me, it sounds like the author isn't really familiar with what the difference between a CDN and a WAF is, or familiar with Cloudflare beyond it being a popular thing they should probably have, for that matter.
Does anyone have a cleaner article?