I won free load testing (fasterthanli.me)
461 points by 0xedb on May 2, 2022 | hide | past | favorite | 134 comments



> As for fly.io, well, I work there, so, they pay me.

Well, that’s nice and all, but if a fly.io customer were attacked with 3.1GB/s throughput, according to the lowest outbound bandwidth price of $0.02/GB [1] they’d be burning at least $3.72/min. 6 times that if attacked from India. That would be a lot less fun.

[1] https://fly.io/docs/about/pricing/

Edit: They mentioned they waive charges as a result of attacks: https://community.fly.io/t/about-rate-limiting/156/4
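For reference, the per-minute figures above are simple unit arithmetic. A hypothetical sketch (the $0.12/GB India rate is inferred from the "6 times" multiplier, not read off the pricing page):

```rust
// Back-of-the-envelope attack cost, using the figures quoted above.
fn cost_per_minute(gb_per_sec: f64, usd_per_gb: f64) -> f64 {
    gb_per_sec * usd_per_gb * 60.0
}

fn main() {
    let lowest = cost_per_minute(3.1, 0.02); // lowest outbound rate
    let india = cost_per_minute(3.1, 0.12); // inferred 6x India rate
    println!("lowest: ${lowest:.2}/min, India: ${india:.2}/min");
}
```

This prints "lowest: $3.72/min, India: $22.32/min", matching the numbers above.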


We do waive attack related charges. We're not interested in making money on bandwidth. We have a blog post with a few more details here: https://fly.io/blog/we-cut-bandwidth-prices-go-nuts/

That said, bandwidth does cost money. There are three ways we could handle it:

1. Charge as little as we can get away with, be transparent about it, eat the cost when someone has a negative experience.

2. Charge for bandwidth capacity, but don't meter it (i.e., give VMs unmetered 100 Mbps interfaces).

3. Don't charge for it, call it unlimited, put a hidden cap in place, and restrict what kinds of apps can run on the platform.

We opted for #1. I could give you a lot of post hoc reasoning, but the reality is that it's what I'd prefer as a customer. Companies that promise "unlimited bandwidth" feel a little slimy to me. Amos wouldn't be able to run his video hosting on one of those platforms.

Unmetered interfaces with restricted throughput do seem pretty nice. I've used a bunch of services like that. The cost to get started is high, though. And in my experience, the quality is poor. I've never had worse network performance than when I was paying for an unmetered network connection. Which makes sense, because these people attract all the users who want cheap, unmetered bandwidth. And they can't really afford to build enough upstream network capacity to handle all of them.

I don't love surprise expenses any more than you do. I do think we picked the least bad option, though, even though it puts some people off.


Indeed. fly.io encouraged me to move projects there, but since I budget my side projects as if they were an actual business, parts of the pricing page turned me off.

For the time being, I've decided that getting insights into "how well the platform worked for me" was more valuable for me, so my video platform is staying there, but I've initiated multiple discussions about pricing and I intend to keep doing so until I'm happy with the answer.


Cool, thanks for pushing this discussion from the inside.


[flagged]


If your goal was to get us to spend more money on bandwidth, you succeeded. Well done. We documented how to do this quite a while ago, though: https://fly.io/blog/we-cut-bandwidth-prices-go-nuts/


"What we've done instead is set a blended price that fits most apps running on Fly.io, and decided to just eat the extra cost from outliers. If you want to exploit that, run an app in Sydney with a whole bunch of users in India. We'll lose money on your app and you will win one round of capitalism."

Impressive foreshadowing.


This sounds like you're claiming responsibility for the attack?


They do so more explicitly downthread.


Do you recall what this user said downthread? They seem to have deleted their other comments.


The posts were flagged, you can see them if you turn on showdead. Here's sammy811's threadstarter:

> I was the one "attacking" the video platform! I saw fly io had insane bandwidth pricing for India, so I spawned a couple of VMs in India to constantly pull the 4k video. Sad the whole ASN got blocked!


Thank you, I'd checked with showdead earlier but for some reason I didn't see the comments on mobile.


OpenResty (Nginx + LuaJIT) can help you limit the damage of unsophisticated DDoS attacks like these. I keep a count of the requests per second each nginx worker is getting. I also set a special cookie on every response from the upstream (it could literally be foo=bar). When the RPS goes above a certain threshold and the special cookie is not present, I serve a static HTML page (bypassing the upstream) that sets the cookie and reloads the page (nginx can do 20K+ RPS without breaking a sweat). In my experience, these fly-by DDoS attacks never use cookies, so legitimate users get through but the bots are blocked.

Of course, if you get hit with something slightly more targeted, this defense is worthless.
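The gate's decision logic itself is tiny. A sketch in Rust rather than OpenResty/Lua, with a made-up threshold and names, just to show the shape of it:

```rust
// Sketch of the cookie-gate described above (hypothetical names/threshold).
// Real deployments do this in OpenResty (nginx + LuaJIT); this only shows
// the per-request decision.

const RPS_THRESHOLD: u64 = 1_000; // only gate when under load

enum Response {
    ProxyToUpstream,    // normal path
    StaticCookieSetter, // tiny HTML page: set foo=bar, reload
}

fn handle(current_rps: u64, has_gate_cookie: bool) -> Response {
    if current_rps > RPS_THRESHOLD && !has_gate_cookie {
        // Under load and no cookie: serve the static challenge page
        // straight from nginx, never touching the upstream.
        Response::StaticCookieSetter
    } else {
        Response::ProxyToUpstream
    }
}

fn main() {
    // Quiet traffic passes through even without the cookie.
    assert!(matches!(handle(50, false), Response::ProxyToUpstream));
    // Under attack, cookieless clients get the challenge page...
    assert!(matches!(handle(5_000, false), Response::StaticCookieSetter));
    // ...while clients that ran the reload (and got the cookie) get through.
    assert!(matches!(handle(5_000, true), Response::ProxyToUpstream));
}
```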


Either it's the cost/benefit ratio that's been keeping them from handling this, or it simply hadn't crossed their minds until now, in which case you might soon need a new mitigation strategy ;)


Definitely a matter of cost/benefit ratio. DDoS attacks using headless browsers are pretty common.


Mirror in case the attack successfully picks up again: https://web.archive.org/web/20220502013024/https://fastertha...

As a treat, this is a testament to a logic error I made in the caching code (inserted uncachable versions of pages into the cache for a little while). Enjoy!


You mentioned AS8075 (Microsoft), but with a comment of "Corp MSN"? It's definitely Azure (whether compromised or rented is something I can't ascertain). The "MSN" here is legacy: it stands for Microsoft Network, from back when MS was also an American ISP (it seems they still run a dial-up service: https://en.wikipedia.org/wiki/MSN_Dial-up).

Also, Contabo Asia (AS141995) is misclassified as in Germany. Although they are German, that AS is exclusively used for their Singaporean operations.


Thanks for the context, I fixed those up. This was my first time looking up AS for additional info and it was a lot of fun.


No problem. Unfortunately, some AS numbers don't map to the expected company name, either because the company was later acquired by a bigger one or because nobody bothered to keep the whois information accurate. I only knew by chance that Contabo operates in Singapore (due to IPv4 exhaustion, they used their previously German addresses for the subsidiary), and happened to know the history of MSN.


I'm spending a lot of time in Rust, but I always learn so much when reading your articles. In particular, how do you manage to find all these interesting crates you're using, like arc-swap, Moka, or color-eyre? Do you actively search for them, or do you have a habit of reading the new-crates feed on crates.io in the morning with a cup of coffee?


Not the OP, but the r/rust subreddit is my other lurking spot besides HN. (And I’m also currently working on a site to document some of these useful crates)


/r/rust is a great place for that, I also occasionally click through most of the https://lib.rs/ categories, just to see what the names I don't recognize are about.

My twitter timeline also has tons of neat little crates pop up every month, and I have to thank my colleagues at Netlify (back when they were doing Rust) and now at fly, for showing me some of those I showcase in the article.


Is Netlify not doing rust now? What’s the story there?


It happened after I left, I don't think it's my story to tell (and I think they have higher priorities than telling that story).


Yeah, I’ve found the r/rust community very welcoming. The rust folks on stackoverflow much less so.


Regarding the issue of people sending requests directly to your server, bypassing Cloudflare: While you can use IP allowlisting for this, Cloudflare can also be configured to send requests to your server exclusively over mTLS, with a client certificate chaining up to a root CA which they publish. If I were worried about this kind of attack, I'd probably turn that feature on (and then reject requests that don't present a valid client cert); I don't fully trust IP allowlisting. (On the other hand, I can't say for sure that the performance overhead wouldn't be a problem in a DDoS scenario. But it doesn't seem super likely?)


This is a good alternative when security is the priority. Wouldn't verifying an mTLS cert's validity be significantly more expensive than whitelisting IP blocks, though? Especially when a server is potentially under duress? Can you cache the verification so it wouldn't have to be done each time? All legitimate questions! I don't have the answers but am curious.


mTLS certainly seems like the most expensive option here (though not that expensive outside of attacks).

S-tier implementations include: firewall rules or a BPF program, or a VPN-based approach (like Cloudflare Tunnel). The way I did it is fine for small-scale attacks like that one, but a large enough attack will have you spend too much time on syscalls and waste valuable kernel resources.

I'd love to read a write-up about how these different approaches perform in practice, because right now this is largely gut feeling / the popular wisdom that "the sooner you block, the better".


> Wouldn't verifying an mTLS certs validity be significantly more expensive than whitelisting IP blocks though?

I wouldn't expect this to be all that significant if you're already using TLS. So in the context of only allowing Cloudflare to establish connections to the service, and being 100% fronted by Cloudflare, mTLS isn't much more expensive than the TLS we all use every day. And my read of the blog post is that most of the expense / optimization is in the time it takes the server to generate and serve a page.

> Especially when a server is potentially under duress?

Well, it's a tradeoff. The desired property is that only Cloudflare can make requests to the actual web server, so compared to IP whitelisting there are some tradeoffs. I think the biggest one is maintenance: IP whitelisting requires staying on top of any changes from Cloudflare. I'm sure Cloudflare is good at pre-announcing new IPs, PoPs, etc., but there's a small risk of missing an announcement, especially with a set-it-and-forget-it approach. Although, to be fair, mTLS has the same problem depending on when the root is set to expire.

> Can you cache the verification so it wouldn't have to be done each time?

Sort of, but it requires support on both the client and the server, so it depends on whether both sides have opted in. The term to research is TLS session resumption, which involves its own security concerns.

More effective at this level is connection reuse: as long as the client and server support Connection: keep-alive, you do the mTLS handshake once and run hundreds or thousands of requests over that single verified connection.

My 2 cents: the biggest downside of the mTLS approach is that you may still leak the location of the web server. If it's on the same domain, or a domain known to be used by the target, certificate scans may turn it up. IIRC the server cert is exchanged before the client certificate, so it's possible to leak this without ever presenting a client certificate. I'd have to double-check the spec to be sure, though.

Although, to be fair, this problem also exists in the specific whitelisting implementation from the OP, since it appears to be code embedded in the web server that drops the connection if it's not from an approved source. So someone scanning for certs/servers could still learn about the server from the TLS connection, then run a simple flood-the-server-out-of-existence attack, or more limited resource attacks on the number of active connections, etc. Ideally, with this type of whitelisting you'd do it in iptables or the host firewall, so that you just blackhole unapproved IPs and reveal nothing about the server's existence.


Ran out of edit time, but wanted to add a quick note:

- I didn't look closely at the presented source code, but the code in the article does appear to update the Cloudflare IPs in a loop. So going out of date isn't an issue in the presented code, but it's something to consider for anyone replicating this.

- Also of note: there's a reason security-minded folks don't like using IP addresses for identity. The whitelist likely covers far more than necessary, and it's treated with less scrutiny than something like a TLS certificate. For a personal blog I don't think this is really a concern, but it might be a consideration for a company with something to protect.


Or you use Cloudflare Tunnel and don't open Ports 80/443 at all.


Fun!

I tend to be paranoid about exposing things to the Internet, so I just put my raw servers behind Envoy. I've tuned it to do rate limiting, circuit breaking (stop sending requests to an upstream when it returns too many errors), idle connection termination, and load shedding when a certain amount of memory is in use. So without any additional configuration for a new service behind the proxy, it's somewhat difficult to make the proxy and the other services stop responding entirely.

I'm guessing that in a real attack, the rate limiting service is a weak link. I use a custom rate limit service to aggregate rate limits across a /24 (and hacked that together in an evening), and that is likely the first thing to blow up and erroneously deny service to legitimate users. (I'm sure I have it set up to fail closed, which will be annoying.)

I had a hard time generating enough load to test any of this on the static serving path. I set up a mirror of my production environment on my workstation, limited the critical services (Envoy + nginx + rate limit + Redis) to a small number of CPUs, and had 31 workers generate synthetic load. I was able to get circuit breakers to open, which at least proves that code works, but I suspect I'll run out of network bandwidth before I run out of memory to track open streams. It's difficult to load test when the upstream can answer most requests straight from memory.

Would be interesting to dig into it more. But for those of you reading this and thinking "I'm going to launch an attack right now", I will just turn off the site if I go over my bandwidth quota. Clone the config repo, host everything locally, run your tests, and send me the results ;)


This inspired me to actually look at my Cloudflare stats, and I realized it's not caching some of my HTML pages even though I had page rules set up to cache everything. I use a static site generator, and it's never been an issue so far (nobody has any reason to DDoS me), but this is a good motivator to fix it. I've been procrastinating on fixing a few things on my site, so I'm adding this to the list. I might also set up Cloudflare Tunnel at the same time.


Hello, friendly Cloudflare employee here... there's a much simpler alternative to managing cache policy if your whole website is static. We built a tool for deploying static websites to Cloudflare [1]. It watches your Git repo for changes, then rebuilds your site and serves _all_ the assets from our edge servers. No need to configure page rules or cache policy, because there's no more origin. The free tier covers 500 builds per month (at most, you'll do 1 build per commit, so you should be covered just fine).

I helped develop Tunnel over the last ~3 years, and I love it, but it's definitely overkill if you just want to serve some static files on the edge.

(Please delete if this goes against HN policy, I'm trying not to be a shill here, just help tristor avoid spending an hour configuring page rules and cache policies)

[1] https://pages.cloudflare.com/


This is great info. I haven't tried out CloudFlare Pages yet. I may consider it for my next site, but I mostly host my personal site on a server because I have it for other reasons, and the site takes up minimal resources. Plus learning how to manage the caching rules correctly seems like a good skill to pick up :)


It's more fun in my head if "CloudFlare Attack Mode" allows you to wield CloudFlare as a Black-ICE weapon

https://en.wikipedia.org/wiki/Intrusion_Countermeasures_Elec...


I had never considered service back-pressure or circuit breakers until they became buzzwords in the Java world a few years ago. The concepts are universal, though, and genuinely interesting building blocks for architecture.

Site Reliability Engineering is a fascinating problem space.


Probably doesn't matter much with only a few networks, but this is using the wrong data structure:

    if let Some(net) = ip_nets.load()
        .iter()
        .find(|net| net.contains(&addr.ip()))

ip_nets is a 'HashSet<IpNet>' but it should be a radix/patricia tree.

Something like https://lib.rs/crates/iprange


The `iprange` library contains two bits of `unsafe` code [0].

[0] https://github.com/sticnarf/iprange-rs/search?q=unsafe


The first one seems to be a necessary workaround for the lack of GATs; its safety comment is pretty trivially correct. The second one is entirely unnecessary, and I've just filed a PR to get rid of it.


I looked for that kind of data structure for 30 seconds in the ipnet crate itself, didn't find it, noticed there were only 23 IP ranges and decided it was fine.

(Keep in mind this happened during the attack, so compromises)


why


Because a radix tree's lookup cost is bounded by the address length rather than the number of entries, lookups stay fast even if you have 500,000 IP ranges. It's how routers route.
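To illustrate the idea, here's a toy std-only sketch (not the iprange crate, which uses a more compact representation): a binary trie keyed on address bits answers "is this IP covered by any stored range?" in at most 32 steps for IPv4, no matter how many ranges are loaded.

```rust
use std::net::Ipv4Addr;

// Toy bit-trie over IPv4 prefixes. Real implementations (Patricia/radix
// trees) compress single-child paths, but the lookup cost is the same idea:
// bounded by address length, not by the number of stored ranges.
#[derive(Default)]
struct PrefixTrie {
    terminal: bool, // a stored prefix ends here
    children: [Option<Box<PrefixTrie>>; 2],
}

impl PrefixTrie {
    fn insert(&mut self, net: Ipv4Addr, prefix_len: u8) {
        let bits = u32::from(net);
        let mut node = self;
        for i in 0..u32::from(prefix_len) {
            let bit = ((bits >> (31 - i)) & 1) as usize;
            node = &mut **node.children[bit].get_or_insert_with(Default::default);
        }
        node.terminal = true;
    }

    fn contains(&self, addr: Ipv4Addr) -> bool {
        let bits = u32::from(addr);
        let mut node = self;
        for i in 0..32 {
            if node.terminal {
                return true; // covered by a shorter stored prefix
            }
            let bit = ((bits >> (31 - i)) & 1) as usize;
            match node.children[bit].as_deref() {
                Some(next) => node = next,
                None => return false,
            }
        }
        node.terminal // exact /32 match
    }
}

fn main() {
    let mut trie = PrefixTrie::default();
    trie.insert(Ipv4Addr::new(173, 245, 48, 0), 20); // one Cloudflare range
    trie.insert(Ipv4Addr::new(10, 0, 0, 0), 8);
    assert!(trie.contains(Ipv4Addr::new(173, 245, 50, 1)));
    assert!(trie.contains(Ipv4Addr::new(10, 1, 2, 3)));
    assert!(!trie.contains(Ipv4Addr::new(8, 8, 8, 8)));
}
```

With 23 ranges a linear scan is obviously fine, as the author notes below; the structure only starts to matter at router-table scale.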


Out of curiosity, I googled how much a DDoS attack goes for these days. Apparently they can cost as low as $10/hour. I don't know if the shady people will deliver, but that's what the internet says. So apparently it's pretty easy to DDoS anyone and make it difficult to trace back to you.


This is only if you have _no_ idea how to use very basic open-source tools to wreak havoc via some open proxies. The real cost of launching small-scale attacks like this is $5/month on your favorite VPS provider.


For anyone reading this: Alex knows about the attack and has my permission to talk about it publicly.


Open proxies? I heard about this more than a decade ago, but why would anybody in 2022 run an open proxy? Or are these open proxies unintentional, i.e. misconfigured?


Some are misconfigured, but others are just honeypots to sniff out some traffic. https://www.youtube.com/watch?v=0QT4YJn7oVI


Many of them are unintentional: a device sitting on the open internet has some vulnerability and gets exploited. Bad guy sets up a proxy on the device and uses it to click a bunch of ads using bots, or crawl Google results, or launch attacks, etc. Or, as mentioned below, they could simply be the result of misconfiguration.

Some of them are very intentional: https://www.torproject.org/


Misconfigure your squid/privoxy and you’ll probably end up on an open proxy list within hours. Been there.


I imagine your VPS provider will gladly give your info to law enforcement whereas a DDoS company might not.


Unfortunately, this type of abuse is essentially only acted on if you either (a) cause some kind of problem for ops at the VPS company, or (b) the victim tells on you. Even then, the absolute worst thing that will happen to you is your account will be closed. There is just no way on the modern internet to investigate/prosecute these kinds of things. That's why one of our primary goals with the free Cloudflare plan is to invert the problem: make DDoS go away by making it free to mitigate.


Isn't the problem to find enough of these open proxies to do meaningful damage?


> if you want to skip it, search for "After the storm".

> Yes, yes, I know, I should add anchor links for headers.

This is a rare case where blind people using screen readers have it (a little) easier. Every serious screen reader I know of has a command to skip to the next heading. It's too bad most sighted web users don't have a similar feature handy.



Oh how I miss this... If Vivaldi has this feature, I might have to switch.


That was fun! Showed some caveats to "I have cloudflare so I'm fine."


What a great post-mortem write-up! Thanks for sharing!


For sites that are mostly read-only for non-logged-in users, Cloudflare can be great. I've got a WordPress content site configured with Cloudflare's APO. Almost 100% of actual human requests for HTML pages or static resources are cached, and there are also a few caching layers at the server level (set up automatically by Cloudways). The site generates around $15k per month and is growing fast on an overprovisioned $50/mo server. I don't think I'll need to spend much more on hosting at even 10x the current traffic, thanks to the Cloudflare caching. Maybe a bit more for disk space since we store large images, but DigitalOcean's Block Storage makes that incredibly cheap too.


I switched my site to static and it's been hassle-free to deploy ever since. Just copy the files anywhere and you're done. I assume it can't be DDoSed either, though I don't think anybody has ever tried (or they did and I didn't notice).


By using Cloudflare APO my site is essentially static. The origin can go down and 99% of pages and all the associated resources can still be served.

For our business use case WordPress is a necessity, so switching to a true static solution simply isn't feasible (we acquire and merge other content sites; 99% of content sites being sold are built on WordPress, so being in the ecosystem is critical for this reason and many others).


> That lets me answer questions like "what RSS readers (that aren't browsers) is my audience using?"

Very interesting... I also use NNW and FreshRSS


I'm working on two projects that could let a stand-alone Rust web server weather a moderate DDoS attack like the author had:

Beatrice [0] - A web server with built-in connection limits and thread limits. It's async but supports non-async request handlers.

fair-rate-limiter [1] - In theory, one could use this to shed most of the load from DDoS attacking nodes.

[0] https://crates.io/crates/beatrice

[1] https://crates.io/crates/fair-rate-limiter


> The traffic doesn't look like something like headless Chrome was used

Setting aside the fact that headless Chrome and other browser testbeds do a good job of hiding their presence, what could be the vector for a botnet infection if this were true? Extensions?


> Setting aside the fact that headless chrome or other browser testbeds do a good job at hiding their presence

It doesn't look like headless Chrome, or headed Chrome, or any kind of Chrome... because there are just requests for one file and no other resources. Chrome would do a lot of other stuff.

This is a wget loop or some other minimal agent with a changed user agent string.


Could be anything.

Could even be just a user with a few beefy machines and a lot of proxies.


Most botnets I've encountered seem to consist of enterprise routers and the like, random IoT stuff. Most of it is probably too old and low-powered to even run headless Chrome.


> Minutes after I posted this article, the attack resumed. Same shit, different AS.

Noob question: was that caused by the article getting posted to HN? Or was it really an attack?


This guy has some great energy.

And humor:

>> Because it doesn't return an AddrStream but instead a Pin<Box<TimeoutWriter<TimeoutReader<TcpStream>>>>...

>> Gesundheit.


I'm interested in what fly.io would cost if you didn't work there.

I've got a fly app and can't proxy it through Cloudflare because that doesn't work.


I have proxied a fly.io app through cloudflare. The last time I tried, it worked.


Not sure if it's because I'm using a subdomain or not.

But I created a cert, added the A and AAAA records to CF, then after it was verified, turned the CF proxy back on.

However, my app stopped working until I turned off the proxy.


A security audit of my small 6 person office found a Digital Ocean machine trying to brute force one of our Windows machines. I'm becoming less and less impressed with Digital Ocean as time goes on.


I snitch-tagged Digital Ocean and other VPS providers involved in the attack, but didn't expect much. They're much bigger than this: what's a big deal for my toy server is barely a blip on their radar.

And realistically, there's only so much they can do about someone running Tor exit nodes / an open proxy on their infra. Everyone in the cloud space has been fighting that off (and miners) for years, it's one arms race among many.


This is not the best solution (fail2ban and 2FA would be better), but https://github.com/skeeto/endlessh is a neat tool if you want to annoy someone running unsophisticated scripts. It's worth noting that annoying a script kiddie might get you DDoSed.


Because we are so small, we were able to easily set up allow lists.


My impression from running a service that was attacked is that Digital Ocean is pretty much average here. Free CI providers, cheap VPSes, and compromised boutique hosts all provide a significant amount of traffic. The variance is such that there is no one ASN to block that mitigates any sort of coordinated attack in a meaningful way.

I think what is happening here is that there are lots of free hosts that let you send traffic to websites (in my case, volume didn't matter, just people signing up for free trials to get a little bit of free compute), and there is really no way for cloud providers to reduce the volume in a meaningful way. They are not necessarily serving malicious customers; rather, their legitimate customers have gotten hacked and are now the attack vector. Or their business is hosting, and people are abusing THEIR free trials. (Consider, if you just want a new IP address with which to sign up for some web service, how easy it is to use something like the CircleCI "free for open source" plan to do that.)

If I ever started my own cloud provider, one thing I'd want to get under control is a good view of traffic leaving the cloud provider. Probably more than ports + bits per second; actually proxy the HTTPS or whatever. That way, if someone starts abusing other people's stuff, there is at least a point where I can rate limit it ("kill all video downloads to notable Rust personality's website because they asked me to") while hacked customers get their stuff cleaned up. This is a hard problem, balancing security and good Internet citizenship, but something I'd want to spend some time on.

Anyway, TL;DR, DigitalOcean shows up on your radar because they are pretty popular. Lots of Linux VPSes equals lots of insecure Linux VPSes, which is the perfect point for launching another attack. There is only so much the cloud provider can do, but doing more would certainly be nice.


does it make you a freeloader xD


Not to criticize, but how the hell do you write these long posts? Why don't others do it too?


Grew up in a very religious, very "literal interpretation of the texts" home. They had everyone hit the books and even write reports. I guess the habit stuck with me after I left the fold.


I assume you're keeping extensive notes as well, right?


+1. I love Amos' extensive writings.


I was very surprised to see that your article about golang got flagged here on HN. Never realized that go was such a sensitive topic. I assume this is related? Can anyone in the golang community give some context?


I'm begging the comment section here to not touch on this topic today — I'm really not trying to cause a third flamewar, and intentionally didn't mention the name of news aggregators or direct links to the articles themselves in that piece.

I was hoping for the discussion here to be about similar experiences: how DDoS really has become a commodity, how other folks' websites are architected, how various platforms fare against these threats (and what that costs), whether folks were aware of Cloudflare's default caching policies, or its DDoS protection.

Anything BUT rehashing the discussions of the past few days: everybody is over that. If you do want to discuss those, you can move that conversation to my subreddit, or my twitter, or heck, send me an e-mail.

(Also, and I've stated this elsewhere many times: the Go community doesn't have to answer for that. This is one individual having well-timed fun, let's please PLEASE leave it at that)


I’m not from any community, but programming languages will always be a sensitive topic. It’s because languages are platforms.

Your productivity in any language is directly proportional to the time and effort you've invested in it. It's in your best interest to pick a language that's likely to thrive and spend time learning its ins and outs. On the flip side, betting on a horse that doesn't win could mean losing months or years of effort. This is why people evangelize the platforms they're invested in: convincing other people to join improves the health of the platform, increasing their return on investment.

This evangelizing can sometimes become contentious if others perceive it as an attack on their platform. People defend their language mostly because they don't want to see it lose popularity: if it did, the language's viability would be threatened and their investment would be in jeopardy. It's also partly because they've spent so long on it that it's become a part of their identity.

A person who thinks of themselves as a “Go developer” rather than a “developer” is going to take that article personally.


I think, also, that Go hasn't generally had a lot of criticism (aside from the generics débâcle) in this venue, so the "attack" might be hitting some especially sensitive skin.

As an aside, I very much enjoyed "A half hour to learn Rust" [1] from fasterthanli.me, as it got me to actually start writing simple stuff in the language, so I'm a little biased in favour of the author, but the critique didn't seem particularly harsh. As someone who's primarily a Java dev, I'm used to seeing much more biting (and often inaccurate) condemnation of my own preferred tool!

[1] https://fasterthanli.me/articles/a-half-hour-to-learn-rust


There are interesting parallels across a wide range of human behaviour. People get invested in cultures, languages, philosophies, economic models, religions, etc...

E.g.: the "east vs west" fight in Ukraine is a great example. One side is invested in the democratic/capitalistic model, the other in the autocratic/central-planning model. The war isn't just over a patch of land on the border of Europe.

If people are willing to start a shooting war with actual blood, violence, and death over a mere "ideology", then it shouldn't come as a surprise that developers are willing to go to a "war of words" over their favourite language or platform.


Well, if you write a blog post with the title "Lies we tell ourselves to keep using [X]" (where [X] is a reasonably popular programming language), that implies that you think using [X] is a mistake and everyone who is currently using [X] is either clueless, deluded or trapped by "sunk costs". If you then continue with an article that further expands on just how bad [X] is in your opinion and that it has almost no redeeming qualities, that's pretty much the definition of flamebait...


Ok, I get it! I was mostly surprised, but reading dang's comment (https://news.ycombinator.com/item?id=31207191) it starts to make sense.

The dots that I didn't connect was that the post was a reply to the response on HN and since the post was controversial, it (being a HN comment, in a sense) should have been "more thoughtful and substantive, not less".

Perhaps the headline was the worst part of the article? Like you say - "Lies we tell ourselves to keep using [X]" could have just been "Common reasons to use [X], and why I disagree with them".

Thanks for pointing it out!


@dang had a (reasonable-sounding) post on the original thread stating that Hacker News generally dislikes having meta-commentary about the previous day's posts on the front page.

If this topic gets flagged then we’ll know if it was go specific I suppose.

EDIT: Why am I flagged?


The issue isn't the programming language discussed, it's the choice of commentary. His submissions have been getting more and more trollish lately, and that leads to negative discourse.

We all have our own particular axes to grind, but even I, someone who is language agnostic, am getting tired of the tone of those submissions.


Except that, as far as I can tell at least, Amos is not the one posting these articles.

The problem is that he’s writing interesting, informed commentary on topics that are hot button issues for people who are a tad over-sensitive about their career choices :)

You can be a happy Go programmer, whilst also recognising that the language has limitations & things it’s really not good at. That would be the mature, honest response to these articles. What is not a mature, honest response is DDoSing the guy because you (the generic you, not you specifically hnlmorg!) don’t like his opinions.


> You can be a happy Go programmer, whilst also recognising that the language has limitations & things it’s really not good at. That would be the mature, honest response to these articles.

The problem is the author preemptively shrugged off those responses by ostensibly accusing those coders of having Stockholm Syndrome.

It was content like that which caused the issues. Regardless of whether it’s eloquent trolling or just a genuine but passionate piece, it was touching on an already hot topic with poor consideration about how it would be received.

And that’s fine for a personal blog. But when you then see your articles explode online and then proceed to write follow-up pieces in the same tone and intended for the same audience (regardless of whether he directly submitted it to HN), it’s harder to dismiss as someone not trying to exploit flame wars to boost their own blog’s traffic.

I guess they succeeded in that too; albeit a DDoS attack wasn’t quite what they intended.

To be clear, I don’t agree with the DDoS attack. Nor do I believe they deserved it (what they actually deserved was just for the articles to get flagged and forgotten) but I can still blame the author for the arguments on here when they saw the existing discourse and decided to write follow-up pieces in equally antagonistic tones.


this:

> I can still blame the author for the arguments on here when they saw the existing discourse and decided to write follow-up pieces in equally antagonistic tones.

seems very victim blaming to me.

Amos’ articles are opinionated, well written & amusing rants. If a bunch of immature Go programmers can’t take a spot of criticism directed at their favourite language then that says a lot more about them than it does about anyone else.


> seems very victim blaming to me.

Given HN is the victim in that context, that would mean my statement is the literal opposite of victim blaming.


In this context the victim is Amos who was getting DDOSed, apparently in response to his critical articles about Go.


[flagged]


> Aren't you exactly the type of person pja is referring to, when he mentions "tad over-sensitive about their career choices"?

I don’t write Go in my day job (nor have I ever). So no, I’m not even remotely that type of person.

> The articles were perfectly fine. The problem you have with them is that you don't like the author's opinion.

It wasn’t the technical opinions I disagreed with, quite the opposite actually. I largely agreed with the technical points.

It was the flourishes they used to express those opinions.

Much like yourself now, he was rather rude and presumptuous about others. You’ve made several assumptions about me here that are wildly inaccurate and completely unwarranted. Am I being over-sensitive to your comments? No. I’m just calmly telling you you’re out of line making them. But it’s comments like yours that do lead to escalations in tone, which usually end up in arguments like those we’ve seen over the weekend.

> Your GitHub disagrees.

I’ve been writing software for more than 30 years and have used over 20 languages to varying degrees - at least a dozen professionally. I’ve even designed a few DSLs in my time too.

My GitHub profile is a relatively recent thing because “everyone was doing GitHub” so the vast majority of my code is on a private git server on my home server. I’m not inclined to open source every piece of code I’ve written and a lot of my earliest contributions to other open source projects do pre-date GitHub becoming mainstream.

In fact I’ve been doing this so long that I have a binder full of code print outs that I used to take to interviews. ;) (or maybe I threw that out a couple of house moves ago?)

While the vast majority of code I’ve published on GitHub is Go, that’s only because at the time of creating the GitHub profile I was considering going for a Go job (I didn’t in the end but the profile still helped my CV). There’s only really one project on GH that I actively maintain. The rest I just leave up because they’re there already.

One thing I’ve learned from my time is that all languages have their pain points. Rust included. And sometimes an academically worse language might be better suited for a specific task. So I tend not to make assumptions about people for their choice of language. ;)

Edit: worth adding that I used to be like you guys, being highly critical about languages. In the 90s I used to mock PHP as a stupid Perl. And in the 80s I was mocked by Assembly developers for writing in Pascal. I’ve grown up since then though. As I’m sure a lot of the Rust folks who mock Go eventually will do.


Except OP is not being mature. When you make claims such as "You should not run Go in production", which the last 10 years have shown to be the opposite of the truth, well, you deserve all the heat.

Funny, because he works for fly.io, which explains all the Rust stuff. But doesn't fly.io use a lot of Go as well, indirectly?


Where did he make the claim that you should unconditionally not run go in production?


In his last blog post.

https://fasterthanli.me/articles/lies-we-tell-ourselves-to-k...

"It may well be that Go is not adequate for production services unless your shop is literally made up of Go experts (Tailscale) or you have infinite money to spend on engineering costs (Google)."

You should tell that to the thousands of companies running Go just fine in production.


[flagged]


This is a ridiculously ungenerous interpretation of Amos's blogging motivations. What troll waits two years to stir the pot? There's absolutely no need to be inflammatory and I think his posting history demonstrates that he absolutely is not a troll.


It's rarely so deliberate. In fact it's basically never that deliberate because even if the intention were diabolically trollish (which is uncommon and I don't think the case here), HN itself is too random to respond reliably multiple times in a row.

People are almost always just posting things they like/agree with and then other people, with different tastes and backgrounds, react differently. Sometimes by bursting into flames.

  All of her fingertips burst into flames
  All she could say was, "Oh extinguish them, James!"

  (Robyn Hitchcock)


Please don't downrank that one, too. I'm genuinely sorry about the week y'all have had, but this comment thread is the /only/ one asking about "the controversy" (which is dead and gone - everyone has decided whether I had a point or was being a jerk), please just nuke comments asking about it rather than preventing discussions about the cost of cloud hosting, how attacks work, how to mitigate them, etc.

I certainly didn't DDoS myself just to "game the system". It happened, I learned a bunch of things, did an extensive post-mortem, a lot of folks are finding it interesting, that's it - no smart trolling here, just content. I certainly could use a break from "the discourse" as well.


This one is a great article and I don't see it as related to the other pieces (causally yes, but topically no). So, not a problem.

Actually we usually downweight rapid follow-ups from the same site when there has been a major thread just recently (the intention is always to avoid too much repetition along any axis) but I took a look at this article last night and it was so good that I ignored that rule. A good article is more important than a good rule.


Regardless of your intentions, the demeaning title and snark of that post is fuel for flamewars.

Given that it's not the first time, one could reasonably conclude that it was on purpose.

It's a net negative to HN, and users are within their rights to flag should this kind of content appear again.

Not to mention that contrasting Go to Rust is a tired subject of little service to the software engineering community, given the small intersection in their use cases.

edit: 2 instant downvotes. I stand by what I say. HN is valuable because of curation, and flamebait has historically been frowned upon.


And yet here you are, responding to a post that is asking we keep this thread on topic, by trying to rehash said flamewar.


My comment has a point: it is directed at the author because they do not recognize the blatant flamebait appeal of their posts, even after all the drama.

Your comment, not so much.


My comment was a suggestion as to why you might be receiving downvotes.

This thread isn’t the place to exercise your need to express your disapproval and/or suspicion of the author regarding their previous posts. If you’d like to have a direct conversation with the author, there’s plenty of contact info on their website. All you’re doing is dragging this post down into more useless argumentation about something that has already been covered again and again in the previous posts.


You're just making it worse by trying to gatekeep what others should comment on. My original comment has a point and is on-topic for the thread, regardless of whether you agree or not. You're just dragging out this unproductive and offtopic conversation you started.

Please stop dragging this out needlessly with ad-hominem attacks.


nit: ad-hominem is attacking the person rather than what they’re doing or their argument, e.g. saying that someone is an idiot rather than refuting their argument. There’s nothing ad-hominem in any of my arguments: I’m just pointing out that what you’re doing is unproductive and off-topic and giving my opinion about why you’re being downvoted.

You could and reasonably did say the same thing about my arguing with you (that it’s unproductive), but that wouldn’t be an ad-hominem attack. You’re attacking my actions, not my person.


Still pointless and off-topic. I should have just flagged your first message and moved on.

We both made HN a little less great today and I feel bad.


I try to reserve flagging for comments and posts that flagrantly violate the HN norms, since they are likely to create work for the moderation team. I don’t feel like any of our comments in this thread rise to that level, which is why I didn’t flag (or even downvote!) your original comment or any of your comments in this thread, despite its running afoul of the HN guidelines (assume good intent, don’t engage in flamebait or off-topic controversy, don’t comment about votes), although others seem to have.


Thank you, that makes sense! No idea why you got flagged. I found your comment helpful.


I don't think the flagging has anything to do with the Golang community. From my understanding, that came from the HN moderators themselves. @dang made some explanations on one of those threads


The flags came from users. That's almost always the case (see https://news.ycombinator.com/newsfaq.html).


Not sure what happened to my comment but I guess you hid it dang? Thanks! It wasn't my intention to reignite the whole thing.


A user marked it offtopic, which downweighted it. A few longstanding HN users have the ability to do this, as part of an experiment we've been running. I've written about it in a few places:

https://news.ycombinator.com/item?id=31228950

https://news.ycombinator.com/item?id=31227642

https://news.ycombinator.com/item?id=31227584

https://news.ycombinator.com/item?id=31117569

The downweighting is intended for when overly generic or offtopic subthreads are stuck at the top of a page, choking out more interesting or on-topic conversation.

(Unfortunately it caused some trouble a while back because one such downweighted subthread was critical of a YC startup, and a much more important rule is that we moderate HN less when YC or YC startups are the topic. But it was a straightforward mistake—the user who marked the thread generic just didn't realize that the topic was YC related.)

Since it's an experiment, we want to keep an eye on it. Currently I have the software send an email each time a downweight is applied, so we can review them as part of going through the regular HN inbox. When I saw this one, it was such a perfect example that I came to the thread and auto-collapsed it as well. So that's its current status: downweighted and auto-collapsed.

Btw, basically none of these situations get created intentionally. They're a tragedy-of-the-commons thing where each person does what they do innocently but it all adds up to something suboptimal. That's pretty much what moderation exists for: to jiggle the system when it gets stuck in one of its failure modes.


Oh, I missed that! Didn't realize the article was considered so inflammatory. I guess I'm desensitized :)

Reading what dang wrote makes it seem really sensible.

Links to dang's comments:

https://news.ycombinator.com/item?id=31207191

https://news.ycombinator.com/item?id=31207126


[flagged]


What are you using to generate the traffic? It would be cool to see what's needed on the other end of the pipe.


[flagged]


Just so I understand, you decided to protest this pricing not by boycotting the service, or making a public statement about it, but instead by targeting an employee's use of the platform with an attack? I'm sorry, I really don't get why you did this or why you thought it would be effective.


Why were some only some requests Range requests? (the status codes included 206 and 200)


I think whichever PoP India gets served by was very overloaded, resulting in a lot of timeouts, so when wget retries (I set infinite retry attempts), it will do a range request from wherever it left off
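A rough Python sketch of that resume behavior as I understand it (header construction only; the function name and byte counts are illustrative, not wget's actual code): a fresh attempt sends a plain GET and gets a 200, while a retry after a partial transfer sends a Range header and gets a 206.

```python
# Sketch of what a resuming client like `wget -c --tries=0` does after a
# timeout mid-transfer: retry with a Range header asking for the bytes it
# hasn't received yet, so the server answers 206 Partial Content
# instead of 200 OK.

def resume_headers(bytes_already_received: int) -> dict:
    """Extra headers for a retry that picks up where the last attempt died."""
    if bytes_already_received == 0:
        return {}  # fresh attempt -> plain GET -> 200 OK
    # Open-ended range: "give me everything from this offset onward".
    return {"Range": f"bytes={bytes_already_received}-"}

print(resume_headers(0))          # first try: no Range header -> 200
print(resume_headers(1_048_576))  # retry after 1 MiB received -> 206
```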


How are you connecting from so many different IP addresses?


sammy811 is claiming responsibility for the small attack on the video platform.

Nobody's claimed responsibility for the main attack, but the answer is probably "mostly Tor + a bunch of open proxies". There are probably databases of those easily available somewhere?


Tor's exit relays, like all non-bridge relays, are listed publicly [0] (by necessity, that's how the client chooses a circuit). You can see in the screenshot of whatever you're using to monitor by GeoIP that Tor traffic already gets counted separately. Of the request counts visible, it's making up at most ~6% of them, which makes sense -- Tor is a pretty poor platform to launch a DDoS from, since the relays are already decently saturated, and you're eating the cost from the circuit building (negligible in the normal web browsing case, but quickly adds up when you're trying to create as many TCP streams as you can).

[0] https://metrics.torproject.org/rs.html#search/flag:exit
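To make the "counted separately" part concrete, here's a minimal Python sketch of classifying request IPs against a downloaded exit-relay list (one address per line). The sample addresses are made-up RFC 5737 documentation IPs, not real relays.

```python
# Classify request source IPs against a local copy of a Tor exit-relay
# list (one IP per line). The addresses below are placeholder
# documentation IPs, purely for illustration.

exit_list_text = """\
203.0.113.7
198.51.100.23
203.0.113.99
"""

# Parse into a set for O(1) membership checks per request.
tor_exits = set(exit_list_text.split())

def is_tor_exit(ip: str) -> bool:
    """True if the given source IP appears in the exit-relay list."""
    return ip in tor_exits

print(is_tor_exit("203.0.113.7"))  # True  -> count under a "Tor" bucket
print(is_tor_exit("192.0.2.1"))    # False -> count by GeoIP as usual
```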


Why was the site breaking under such a little stress? For requests per second, that is really not that much.


The article covers this in great detail. Do a search for "fair enough" to jump to the relevant section.


(embarrassing, thanks!)


It's... explained in the article?


The code in this article is great advertisement for Golang.

Pin<Box<TimeoutWriter<TimeoutReader<TcpStream>>>>? Gesundheit indeed.

Great writing as always.


Not that great. The first outer layer here is pinning. It's been discussed for some time now but isn't yet available in Go; it may land in 1.19, maybe... (https://github.com/golang/go/issues/46787)


34 million requests really isn't many.

Bigger sites might handle that number of requests every second. Hand-coded and highly optimized services can handle that number of cached small requests every minute on one machine. After all, that's only an egress rate of a few Gbit/s. And your homepage certainly ought to be both cached and small.
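A quick back-of-envelope check of that egress figure (the ~1 KiB response size is my assumption, not something stated in the comment):

```python
# Sanity-check: 34 million cached small responses per minute on one
# machine, assuming ~1 KiB per response (assumed size, for illustration).

requests_per_minute = 34_000_000
bytes_per_response = 1024  # assumed 1 KiB cached homepage

rps = requests_per_minute / 60
egress_gbit_s = rps * bytes_per_response * 8 / 1e9

print(f"{rps:,.0f} req/s, ~{egress_gbit_s:.1f} Gbit/s egress")
# -> 566,667 req/s, ~4.6 Gbit/s egress
```

which does line up with "a few Gbit/s" for small cached responses; a fatter page invalidates the claim quickly.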


> Bigger sites might handle that number of requests every second.

I can think of maybe 10 sites that get anywhere near that amount of traffic and they have millions and millions of dollars of infrastructure behind them. 34 million requests per second is an absurd amount! Nobody even knew how to handle that level of traffic 10 years ago.


I take it you didn't read the article? Because that was the author's conclusion, and caching content for logged out users was the main solution.


Depends on the request. 34 million calls to a static page isn't much, even for the smallest VPS. 34 million requests to a data-heavy page with dynamically generated and hard to cache content can be a lot.



