
I regret clicking this link without Adblock. Popover videos, mailing list signups, giant ads taking most of my vertical space, ugh.

Does anyone have a cleaner article?


Look at the title above. It only says Eza and ls. It says eza is maintained, which tells me the other, ls, is not maintained.

Doesn’t mention exa.

Yes, it’s missing nuance if you don’t click through, but that’s a complete statement, and I wouldn’t expect people to click through to get more context.


It's saying "modern, maintained", implying that ls is either not maintained (wrong), or ls is not modern (can be argued to be true). Only one of those two properties needs to hold for the entire label to fit.


I agree with you that their usage of language is ambiguous and should be clearer. I was just explaining the situation, not defending the description.


Yes, the title is not clear, so the parent was just clarifying.


Side note: the English language is a dumpster fire, and it’s easy for these issues to happen.


I strongly suspect the title could be translated into numerous languages verbatim, without losing the unintended interpretation. It is so for a few languages I know.

You can try it with translation tools.

The problem is semantic: in any language whatsoever (I suspect) if we express the idea that X is a replacement for Y, and in the same sentence mention some attributes of X, it means that those attributes are relevant to qualifying X as a replacement, which implies that those attributes are lacking or inadequately present in Y.

Without heaps of prior context, it is an impossible interpretation that the X attributes are not actually lacking in Y, but in a previously attempted replacement Z.


Absolutely nothing in the confusion here is specific to English.

I don't know why you would try to take this opportunity to criticize English when this misunderstanding could be present in literally every other language.

Because there is nothing whatsoever here that is a case of linguistic confusion or vagueness -- it is a conceptual issue of comparing two items, applying an adjective to one, and leaving the reader to wonder what that implies about the other item.

And no, English is not a "dumpster fire". Every language has its pros and cons. But there is no language on Earth that is a "dumpster fire". There is absolutely nothing productive or good that can come out of blanket, utterly unfounded statements like that.


Correction: the English language is a dumpster.


Don’t comment based on just the title, please.


I clicked through the title, but mostly looked at code and example invocations and output.


Also, it does actually work on macOS despite this. We’ve had it catch someone getting malware.


To be honest, this makes a lot of sense. The time saved in support is probably worth way more than the costs of dealing with any security fallout.


For that I think having a remote "reset password" option is more sensible. It would avoid issues coming from password reuse.


…and help the customer reconnect all devices on the WiFi?


Yes. It would be the same as resetting your email password and needing to login again on your devices.

If a password is so precious that you share it plaintext with third parties it is a bad usecase for a password.


The level of effort and obviousness of an email reset is nothing compared to helping someone figure out how to reconfigure every smart device ever made.


So it's a bad usecase for a password, then. Perhaps every router should ship with a preconfigured VLAN for shitty smart home stuff that is a lot more open, or maybe we should stop trying to stick internet into everything ever created.


Why should it be just the IoT devices that get the insecure network? Why not just stop trusting the LAN altogether and instead use technologies like HTTPS and DoH to ensure privacy on the important devices? That seems to be the way the tide is turning anyway.


Personally I'm all for that, but people & packages seem to be pretty promiscuous about listen-address defaults and assuming everything behind a router's NAT is trusted.

Treating the network as untrusted is good, but as long as some people are paying for service, traffic, and bandwidth, there are reasons not to allow just anything to use your network. And there is also a legal question of liability if someone does something not quite above board from your IP.


Tell me you've never done help desk work without telling me you've never done help desk work.


I've actually worked help desk for about 3 years.

I've had calls lasting over an hour helping customers configure their email on their phone and computer.

I learned not to laugh when people called "the internet" either "that e-thingy", "mozarella foxfire" or "googlé charome".

I dealt with explaining to people why IE6 did not understand SNI when we decided to give all our customers websites HTTPS.

Just saying that I've been in that and seen that.


They can change it back after logging in if they insist.


They forgot the password, so they can't.


Right, good point. There is of course the option to see saved wifi passwords on most devices... but I can see how an engineer decided to bypass all this bikeshedding and just send the damn password haha.


There's always the reset-to-factory-defaults button. The vast majority of WiFi users have never adjusted any of the settings anyway.


Verizon does not get to decide what's an appropriate tradeoff for other people's security.


For Verizon owned routers? For company owned and supported equipment, I can understand it. I might not like it, but I can understand it. Especially if they are on the hook for support.

But, that’s why I run my own router for internet access. It’s my router and I can control what it does. If it goes down, then that’s on me. And I’m okay with that. Would I necessarily want the same setup for my parents? Probably not…


Do they own the rest of the equipment on the network that they're putting at risk?


I'm not concerned with this question as it implies that people haven't got a choice between "rent modem, ez for noobs" and "buy own equipment, fully control it." They do still have that choice; it must be some leftover regulation (from back when the US did that) in the case of cable companies, but I have zero problem with the ISP making those tradeoffs. The people who would trust the ISP-owned device likely have already typed that wi-fi password into things like $99 smart TVs which probably transmit their wifi password, location, and microphone data directly to China. Verizon having the wifi password is not cause for concern here.

Those who are security conscious enough to have concerns about their LAN security do not buy "internet + routers + desktop support as a service" by renting the endpoint equipment -- they buy just the internet connectivity and furnish equipment they can control and trust.


> I'm not concerned with this question as it implies that people haven't got a choice between "rent modem, ez for noobs" and "buy own equipment, fully control it."

If you buy the equipment from Verizon, I will bet you a significant amount of money that it still sends your passwords to them [on edit: with exactly zero disclosure that's detectable to 99.99 percent of users]. In fact, I'll bet you Verizon treats customer-owned equipment exactly like rented equipment except in billing. But anyway.

> The people who would trust the ISP-owned device likely have already typed that wi-fi password into things like $99 smart TVs which probably transmit their wifi password, location, and microphone data directly to China. Verizon having the wifi password is not cause for concern here.

You park your car in bad neighborhoods. Had I not stolen your car, somebody else would have done it.


OK, I forgot we're talking about FiOS here. For sure that is slightly weirder than DOCSIS (which is all I've ever known personally). Since it's not really a standard like DOCSIS you probably "must have" some piece of Verizon-proprietary gear whether rented or otherwise and I'm sure Verizon remote-manages those in the same basic ways like you said. But I am pretty sure that still, security-conscious or advanced users can disable the Verizon device's WiFi and drop it into bridge mode and provide their own router and APs. To me this provides a way to opt out of this that is well within the capabilities of anyone sophisticated enough to understand the risks.


A good argument why the fines for this kind of behavior need to be orders of magnitude higher.


I just looked at the code. It's mostly just handling events and conditions a driver would need to handle just to be a functional driver.

I don't think you could write any driver in 50 lines of code?


The Primeagen is behind this, and they had physical samples at React whatever in Miami recently, for whatever that's worth.


Their site isn't going down from an attack. It's running out of resources for normal operations. It's not even a large amount of traffic, not really.


None of that makes sense.

What does getting a certificate through Let's Encrypt have to do with the server getting overwhelmed?

It's a thing that happens once to renew a cert every couple months.

The performance hit of HTTPS on a modern server is negligible. The performance hit on a watch is negligible.


It is not the overhead of the TLS bit but the chain of trust. Putting a caching proxy in the middle of the system breaks that trust chain. You can do a proxy with TLS, but it is tricky to maintain that chain of trust. A proxy would be a classic way to mitigate (not eliminate) that sort of issue, either on the serving side or the client side. Basically, it lowers the data being returned but not the calls from the clients themselves (this is a DDoS issue). TLS just makes it harder to do a proxy.


I still have no idea what you're talking about.

Cloudflare is either passing traffic through without touching it, or as a proxy is doing tls termination, as they're a trusted CA in most devices/browsers/OSs/etc.

None of this really has anything to do with what's happening with OP.


15k people, hitting a site that isn't actively changing content, shouldn't be bringing any webserver to its knees.

- They've set the generated page to be uncacheable with max-age=0 (https://developers.cloudflare.com/cache/concepts/default-cac...); a quick header check is sketched at the end of this comment

- nginx is clearly not caching dynamic resources (currently 22s (not ms) to respond)

- lots of 3rd party assets loading. Why are you loading stripe before I'm giving you money?

- why is there a random 'grey.webp' loading from this michelmeyer.whatever domain?

This isn't Mastodon or Cloudflare, it's a skill issue.
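
For what it's worth, here's a rough sketch of how anyone can check this for themselves (Python stdlib only; the URL is a placeholder, not their actual article). cf-cache-status is the header Cloudflare adds when it sits in front of a site:

    import urllib.request

    URL = "https://example.com/some-article/"  # placeholder, not the real site

    req = urllib.request.Request(URL, headers={"User-Agent": "cache-check/0.1"})
    with urllib.request.urlopen(req) as resp:
        cache_control = resp.headers.get("Cache-Control", "<not set>")
        cf_status = resp.headers.get("cf-cache-status", "<not set>")
        age = resp.headers.get("Age", "<not set>")

    print("Cache-Control:", cache_control)  # "max-age=0" tells shared caches not to reuse the page
    print("cf-cache-status:", cf_status)    # HIT/MISS/DYNAMIC/BYPASS when Cloudflare is in front
    print("Age:", age)                       # seconds a cached copy has existed, if any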


It's not the 15k followers of ItsFOSS who are generating traffic, it is the servers of those 15k accounts' own followers who see shares, boosts, or re-shares of content. Given that Mastodon, and the larger Fediverse of which it is itself only a part (though a large share of total activity), are, well, federated (as TFA notes pointedly), sharing links requires each instance to make a request.

I'm not particularly clear on Fediverse and Mastodon internals, but my understanding is that an image preview request is only generated once on a per server basis, regardless of how many local members see that link. But, despite some technical work at this, I don't believe there's yet a widely-implemented way of caching and forwarding such previews (which raises its own issues for authenticity and possible hostile manipulation) amongst instances. (There's a long history of caching proxy systems, with Squid being among the best known and most venerable.) Otherwise, I understand that preview requests are now staggered and triggered on demand (when toots are viewed rather than when created) which should mitigate some, but not all, of the issue.
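
As a toy back-of-the-envelope sketch (not Mastodon's actual fetch logic; the instance names and follower split are made up, and boosts that spread the link to further instances are ignored), the fan-out looks roughly like this: the origin sees on the order of one preview fetch per distinct instance, regardless of follower count:

    from collections import Counter

    # Hypothetical follower distribution for an account with 15k followers,
    # keyed by which instance each follower's account lives on.
    followers = (
        ["mastodon.social"] * 9000
        + ["fosstodon.org"] * 4000
        + ["hachyderm.io"] * 1500
        + [f"small-instance-{i}.example" for i in range(500)]
    )

    instances = Counter(followers)
    print(f"{len(followers)} followers across {len(instances)} instances")
    # Each distinct instance fetches its own link preview (roughly once),
    # so the origin sees about one request per instance for a shared link:
    print(f"~{len(instances)} preview fetches for a single shared link")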

The phenomenon is known as a Mastodon Stampede, analogous to what was once called the Slashdot Effect.

There's at least one open github issue, #4486, dating to 2017:

<https://github.com/mastodon/mastodon/issues/4486>

Some discussion from 2022:

<https://www.netscout.com/blog/mastodon-stampede>

And jwz, whose love of HN knows no bounds, discusses it as well. Raw text link for the usual reasons, copy & paste to view without his usual love image (see: <https://news.ycombinator.com/item?id=13342590>).

   https://www.jwz.org/blog/2022/11/mastodon-stampede/


Someone asked them in their comments section:

> Have you considered switching to a Static Site Generator? Write your posts in Markdown, push the change to your Git repo, have GitHub/GitLab automatically republish your website upon push, and end up with a very cacheable website that can be served from any simple Nginx/Apache. In theory this scales a lot better than a CMS-driven website.

Admins Response:

> That would be too much of a hassle. A proper CMS allows us to focus on writing.


Switching to a different backend seems overly dramatic. But configuring some caching seems like a pretty obvious improvement.


and i agree with them. static site generators are nice for people who like that kind of workflow, but when you have a team with different personal preferences you need something that is easy to access and doesn't require much learning to get used to.

what would be ideal is a CMS that can separate content editing from serving it. in other words, a kind of static site generator that is built into a CMS and can push updates to the static site as they happen.


A quick google search turns up several tools to generate static sites from a ghost instance.


It’s been a long time since I’ve been in the CMS space but I thought Wordpress had dozens of plugins for caching even 10 years ago.

I mean, even just hopping over to host your site in Wordpress.com was a viable option if you were in that middle ground between personal blog and having a dedicated server admin to handle your traffic

Hard to believe that you’d be in the business of serving content in 2024 and have to deal with the slashdot effect from 1999 for your blog of articles and images.


WordPress does have bunches of caching plugins. Some VPS hosts (shout out SiteGround) also have multiple layers of server-level caching that they will apply automatically.

You can take most WordPress websites from multi-second load times to 750ms or less (in fact as a regular exercise I set up fresh WordPress installs on dirt-cheap VPS hosts and see how low I can get them while still having a good site. 250ms to display is not uncommon even without CDNs)


true, but i suspect that wanting to self-host the content is a factor (at least it would be for me). and as others have mentioned, there seem to be issues with the cloudflare setup that should have helped here too, so even with self-hosting, it should be possible to handle this.


This is already possible using a headless CMS


can you elaborate on this? my understanding was that headless CMS means that it has no frontend at all, and you build your own using whatever web dev tools you like. in other words, it is used to integrate CMS functionality into my website without hosting the full website inside the CMS.


It shouldn't even be 15k people, unless every person is on a different instance, which is wildly unlikely.


Maybe it's because it's deemed "too difficult" to change it?

Years ago, I was looking into some very popular C++ library and wanted to download the archive (a tar.gz or .zip, can't remember). At that time, they hosted it on sourceforge for download.

I was looking for a checksum (md5, sha1 or sha256) and found a mail in their mailing list archive where someone asked for providing said checksums on their page.

The answer? It's too complicated to put creating checksums into the process and sourceforge is safe enough. (paraphrased, but that was the gist of the answer)

That said, for quite some years now they have provided checksums for their source archives, but they kinda lost me with that answer years ago.
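
For reference, generating checksums really is a tiny amount of work to fold into a release process. A rough sketch (Python; the archive names are made up for illustration):

    import hashlib
    from pathlib import Path

    def sha256sum(path: Path, chunk_size: int = 1 << 20) -> str:
        """Hash a file in chunks so large archives don't need to fit in memory."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            while chunk := f.read(chunk_size):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        # Hypothetical release archives; prints the same layout sha256sum(1) uses.
        for archive in sorted(Path(".").glob("mylib-*.tar.gz")):
            print(f"{sha256sum(archive)}  {archive.name}")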


> - why is there a random 'grey.webp' loading from this michelmeyer.whatever domain?

This got me wondering, and the reason is that they embed a "card" for a link to a similar blog post on michaelmeyer.com (and grey.webp is the image in the card). There's a little irony there, I think.


Just as a reminder: the site sees plenty of traffic from various other platforms. It's quite popular; I'm sure Mastodon is one of the least of their traffic concerns. If they can handle the load from various other viral/popular sources, the server is capable enough (even without proper caching).

Not every site configuration is perfect, and blaming the site's configuration while ignoring Mastodon's inherent issue is borderline not practical.


They went straight for blaming mastodon, without seeming to try and fix or mitigate the issue on their end.

Hard to be sympathetic.

This isn't a new issue, nor is it unique to mastodon. Reducing server load for sites like this is a very common exercise for many reasons.


That sounds like victim blaming.

There have been other cases, such as where a mobile app developer has hardcoded an image from someone else's website into their app, then millions of users request it every time they open the app. Or where middlebox manufacturers have hardcoded an IP address to use for NTP.

Sure, having efficient and well-cached infrastructure setup is good, but there's only so much you can do to "reduce server load" when other people in control of widely-deployed software have made choices that cause millions of devices around the world to hammer _you_ specifically.

The people who made those choices don't give a shit, it's not _their_ infrastructure they fucked over. That's why you need to shame them into fixing their botnet and/or block their botnet with extreme prejudice.

Mastodon's link preview service is a botnet, Mastodon knows it, and they refuse to fix it.


Ahem, mr skill issue, some information: 15k followers != 15k hits.


Each of those caches the preview/etc., so they hit it once in a while, and that works for all their users. Each Mastodon server is independent and can't share that preview cache.

All the issues sound more like a them problem than a Mastodon or Cloudflare problem.

Something's wrong with their setup. Hard to tell, as HN has now brought them to their knees.


Cloudflare being unable to handle 100MB of traffic over 5 minutes sounds like itsfoss failed to set something up properly. The link preview is a fraction of the total page that would have to be loaded for each individual user no matter where they came from.

ActivityPub itself could more intelligently cache or display the content, but something doesn’t add up if Cloudflare can’t handle that kind of traffic, let alone Hacker News traffic.


They've set their headers to prevent caching by Cloudflare: max-age=0.

The images aren't the problem; it's their dynamically generated page, even if the content is pretty static.


I believe cloudflare is just letting their requests go through and their webserver can't handle it?


vbezhenar: They've misconfigured something and the page itself is marked max-age=0, which disables Cloudflare caching. I'm also betting nginx isn't caching either.
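
To illustrate (a toy sketch with Python's built-in http.server, not their actual stack; the header values are arbitrary examples): the whole difference is whether the page goes out with headers that let a shared cache reuse it.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    PAGE = b"<html><body><h1>Mostly static article</h1></body></html>"

    class CacheFriendlyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            # max-age=0 would force every cache to go back to the origin on each
            # request; public + s-maxage lets a CDN or nginx proxy serve the page
            # from cache during a traffic spike (values here are examples only).
            self.send_header("Cache-Control", "public, max-age=300, s-maxage=3600")
            self.send_header("Content-Length", str(len(PAGE)))
            self.end_headers()
            self.wfile.write(PAGE)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), CacheFriendlyHandler).serve_forever()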


What's the point of using Cloudflare without caching?


Cloudflare has some other features like a Web Application Firewall (WAF) and bot protection/filtering (which seems like it would solve their problems too?)

From the article:

> Presently, we use Cloudflare as our CDN or WAF, as it is a widely adopted solution.

To me, it sounds like the author isn't really familiar with what the difference between a CDN and a WAF is, or familiar with Cloudflare beyond it being a popular thing they should probably have, for that matter.


> Something's wrong with their setup. Hard to tell as now HN has brought them to their knees

Agreed, and it's hardly surprising that if the site can't cope with link-preview traffic, non-trivial page views would be trouble too!

