I find it somewhat chilling how their original colo hung them out to dry.
Maybe I'm weird, but I'd consider colo to be a closer cooperation than just renting some virtual servers from wherever. And just getting told "Yup, you're null-routed. No, we can't give you access for specific sources over a different path. Get fucked" - or, in fact, not getting told that at all - is rough. One of our ex-hosters was like that.
And as a service provider, I have strong feelings about the customer service there.
Maybe I don't know big infrastructures, but this just leaves me with a weird feeling in my gut.
But hell. Make sure to give your engineers - and their families - something. After some hell-weeks, we've given people some budget to do something fun with their family, because the company had taken up so much of their private time during those weeks.
Corporations gonna corporate. And when they merge and buy each other, information disappears. People get fired or leave, dropping relationships.
I had a ten year colo relationship with AT&T, mostly but not entirely datacenters, where the salescritter would be changed every single time I tried to talk to them. 3-4x a year, they either left or were re-org'd elsewhere or our account was moved to a different group. (Datacenter techs: competent. Datacenter manager: competent. Sales staff: chaotic.) Eventually I learned to open a ticket to find out who the salescritter would be.
On the other hand, there was a competent server salesperson at company X who I used to buy lots of stuff from; company X started raising prices and flubbing tech support returns, so on my next order I tried company Y. I was rather surprised to get the same server salesperson -- who had just taken a job at Y.
Depends on the contract and the attack size. Sometimes the DC has to pick all its other customers over trying to handle a DDoS for one. Our DC had an issue where packets over 1492 bytes were being dropped in Chicago by one transit provider, and it took 3 hours before the call was made to drop them.
Being null routed is really the only thing they can do. They then undid it and had to do it again. This wasn't a standard DDoS attack, which they normally handle just fine.
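For anyone unfamiliar with either of these, a rough sketch of what they look like in practice (host and prefix are placeholders; 192.0.2.0/24 is a documentation range). The 1492-byte symptom can be reproduced with don't-fragment pings, and a null route is literally one line at the provider's edge:

    # Probe either side of the 1492-byte boundary with DF set:
    # 1464 bytes of ICMP payload + 28 bytes of headers = 1492 on the wire.
    ping -M do -s 1464 -c 3 host.example.net   # should get replies
    ping -M do -s 1465 -c 3 host.example.net   # 1493 bytes; dropped on the bad path

    # A null route: all traffic to the victim prefix is simply dropped,
    # sacrificing one customer to keep the rest of the network up.
    ip route add blackhole 192.0.2.0/24
    # Undoing it is equally trivial, which is why the cycle can repeat
    # as soon as the attack resumes.
    ip route del blackhole 192.0.2.0/24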
Subscribed, and then unsubscribed shortly after, when they mistook Greenland for Iceland (42m55s), Italy for ??? (43m42s), and then Equatorial Guinea for Chad (44m55s).
But even if you only have a single action available, isn't communication with the customer key? We've pulled customer systems offline before, and parts of the customer's organization were glad, because we could provide information and help them handle the harsh situation they were in.
And even if this isn't a 1%-of-revenue customer, if they are receiving a DDoS-whopper you cannot handle, that would probably be a customer you want communication with. Even if the goal was just to figure out whether there was glory in hosting them, or to just fire them.
Just null-routing and being unavailable seems unprofessional, though I know quite well how ... well-structured these acquire-everything hosting providers are internally. That doesn't change the fact that this is bad customer service.
Looks like Cloudflare did change their minds later and offered to mitigate the attack pro bono:
> Following our initial quote from CloudFlare, we understand that some CloudFlare employees undertook a grassroots effort internally to convince the leadership to sponsor our needs, and eventually CloudFlare came back to us with an offer to sponsor our services for us free of charge. This was a very generous offer for which we are very appreciative; in the end we did not take them up on it as we had made substantial inroads towards an alternative solution by that time. I have had my reservations about CloudFlare in the past, but they were there for us in a time of need and I am grateful for that.
Typical "corporate pricing"--they offer a really high price they'll expect you'll negotiate downwards to something reasonable. The Sourcehut negotiators probably never dealt with this kind of "sales model" before.
That said, what will happen when more companies publish their experiences with "enterprise sales"? There's an article from HEY[1] about how broken the sales process is. To get a quote, you normally have to endure 2 or 3 Zoom calls before the price is unveiled.
There's probably room for an innovator to fix all of this.
We did negotiate them down a bit but we didn't feel that we could come to an agreement within our budget and decided to move on. Apparently this was an excellent negotiation tactic because they came back with an offer of $0!
It’s not broken per se. Charging different customers different prices based on their willingness to pay maximizes profits. Heck even Sourcehut does it, but they don’t require interacting with sales because at their pricing they don’t have enough margin for a sales team.
> As unfortunate as these events were, we welcome opportunities to stress-test our emergency procedures;
This right here is invaluable and something you only get from experience. Planning and theory only get you so far.
I extend this thinking to deploying large infrastructure changes you've never done before - you can only plan so much before pulling the trigger and just doing it and seeing what happens.
Wikimedia's operations team goes through a full-datacenter failover regularly, once every 6 months or so. It takes several hours of intense all-hands-on-deck operations. They do this repeatedly in order to be sure that all of the procedures are practiced and the documentation is well maintained.
I've been using Sourcehut for a couple years now. One thing this outage taught me about the service that I didn't know is that Mercurial (hg) is community maintained:
> We also did our best with hg.sr.ht, but it is community maintained
It looks like git.sr.ht is hosted on OVH in France, while hg.sr.ht is hosted on High5! in the Netherlands.
It's not entirely clear to me how this affects their product roadmap or support, but definitely good to know.
This also came as a surprise to me! Not only that but:
> restoring service was delayed until we could get the community maintainer, Ludovic Chabant, online to help
Maintainer, singular!
The only reason I use Sourcehut, and the main reason I pay for it, is because I stubbornly still use Mercurial and want first-class support for it. With the utmost respect to M. Chabant, that is not exactly first-class.
hg.sr.ht would not exist if not for being community maintained; the cost/benefit ratio does not work out in its favor were we to maintain it internally. The deal is that we provide infrastructure and operations but that the software itself is maintained by the community that needs it. I think this is an advantage of SourceHut's free software model and ethic that allows people to build what they need and to get its infrastructure needs met in a way that wouldn't be possible, for example, on GitHub.
> It looks like git.sr.ht is hosted on OVH in France
They explain it here:
> However, we found that OVH’s anti-DDoS protections were likely suitable: they are effective, and their cost is amortized across all OVH users, and therefore of marginal cost to us. To this end the network solution we deployed involved setting up an OVH box to NAT traffic through OVH’s DDoS-resistant network and direct it to our (secret) production subnet in AMS
That's such an odd choice for this type of infra. I've had horrendous experiences with OVH in the past and, what's even worse, terrible customer service. Yes, this was about 8 years ago, and not with France-based metal, but still...
Being that this is Drew, I wouldn't be shocked to know that this provider choice has more to do with an anti-establishment manifesto than any practicality. Then again, I might be wrong.
Well, it's certainly better than their last provider who they couldn't reach during a critical time, and still cannot reasonably communicate with.
They can at least reach and reason with OVH, as mentioned when they got flagged as an outbound DDoS source.
> Being that this is Drew, I wouldn't be shocked to know that this provider choice has more to do with an anti-establishment manifesto than any practicality
I feel this is a pretty unfair barb considering one of their first moves was reaching out to Cloudflare. Unfortunately, non-http traffic + the need for tls termination on their own servers (pretty sure cloudflare calls this Keyless SSL) squarely lands them as an enterprise customer w/ enterprise pricing.
Drew had probably already entered into agreements with OVH when Cloudflare came back around, and we don't have insight into the terms, or how long Cloudflare's second offer was good for.
> Being that this is Drew, I wouldn't be shocked to know that this provider choice has more to do with an anti-establishment manifesto than any practicality
Not wanting your traffic MITM'd is anti-establishment. That's where we're at LOL.
hg.sr.ht is operated by SourceHut, but the software is maintained by the community. Ludovic is the primary maintainer and various other Mercurial users participate in its development.
For me there's a bit of a language barrier with the terminology. After reading the sentences about hg.sr.ht and community maintenance, it seems that some notable meaning is being conveyed about what that means for the operation of the service, but it's one I'm not smart enough to understand.
I appreciate the service though, so I hope the difference between maintained and operated doesn't mean anything in the long term.
Mostly curious about the k8s plans. From some past posts it seems that the team was strongly against employing containerization [0]. However, it seems something changed. If anyone has more info about this, I'd love to hear more.
I am really curious whether the DDoS tried to follow them to the new infra and failed to cause an outage, or not. Apparently the perpetrator noticed when they got Cogent to narrow the null route, but the blog post notes they still can't access the original subnet in that datacenter. Are they still trying to knock Sourcehut offline? Is the DDoS still pointing at now-deprecated infra for some reason?
> At about 06:30 UTC the following morning, the DDoS escalated and broadened its targets to include other parts of our PHL subnet. In response, our colocation provider null routed our subnet once again. This subnet has been unreachable ever since.
Right, that's expanding to the rest of the subnet in their old DC. They've since migrated to the new DC with new countermeasures. Did the DDoS follow, and the countermeasures are working? Or if it didn't follow, why not?
There's also the question of whether the DDoS is still even trying the old infrastructure. The post says it's unreachable, but that would be true anyway if the null route hasn't been removed yet.
Yes, the DDoS followed us to networks with countermeasures, and yes, the countermeasures worked. We don't want to disclose too much about that, though.
They’re on OVH now and should have protection from it by virtue of being on their network.
> However, we found that OVH’s anti-DDoS protections were likely suitable: they are effective, and their cost is amortized across all OVH users, and therefore of marginal cost to us. To this end the network solution we deployed involved setting up an OVH box to NAT traffic through OVH’s DDoS-resistant network and direct it to our (secret) production subnet in AMS; this met our needs for end-to-end encryption as well as service over arbitrary TCP protocols.
I'd consider it mostly protected, because no, their servers are not on OVH, just a single box performing front-facing NAT/proxying, essentially. The attacker now just needs to find the "secret" production subnet and attack it directly instead of through the front-facing NAT addresses.
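For illustration, that front-facing box amounts to a few lines of NAT plumbing. This is not SourceHut's actual config; the addresses are invented (198.51.100.10 as the public OVH side, 10.0.0.10 as a backend in the hidden subnet):

    # Allow the box to forward packets at all.
    sysctl -w net.ipv4.ip_forward=1
    # Send inbound HTTPS on the public address to the hidden backend...
    iptables -t nat -A PREROUTING -d 198.51.100.10 -p tcp --dport 443 \
        -j DNAT --to-destination 10.0.0.10:443
    # ...and rewrite the source so replies return through this box.
    iptables -t nat -A POSTROUTING -d 10.0.0.10 -p tcp --dport 443 \
        -j MASQUERADE

Because this is layer-4 NAT, TLS still terminates on the backend (the end-to-end encryption requirement from the quote), but it also means the backend's real address has to stay secret, which is exactly the weakness.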
That is very easy to mitigate, because you null route the production subnet except for a VPN endpoint that can only be reached by the proxy. You can even run the VPN over a completely different IPv6 route.
They could still try to knock your entire datacenter offline but that is much harder.
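A sketch of that setup, with assumed addresses: the production subnet is null routed publicly, and the only way in is a WireGuard tunnel configured with exactly one peer, the proxy:

    # On a backend host in the hidden subnet:
    cat > /etc/wireguard/wg0.conf <<'EOF'
    [Interface]
    PrivateKey = <backend private key>
    Address = 172.16.0.2/24
    ListenPort = 51820

    [Peer]
    # The front-facing proxy is the only configured peer; packets from
    # anyone else fail the handshake and are silently dropped.
    PublicKey = <proxy public key>
    AllowedIPs = 172.16.0.1/32
    EOF
    wg-quick up wg0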
That's not much harder; that's exactly what happened to them in the initial attack.
You're still depending on either a "secret" IPv6 network, or your upstream provider performing some source-based routing to only route packets from the VPN connection. I doubt that's available to a simple colo customer.
Would have liked to know what the difference was in response between Cogent and Level3. Did only Cogent respond at all, or was Cogent the one handling all their IPv4 space?
I admire Drew's work and SourceHut, but I wonder if choosing colocation instead of a cloud provider, and not adopting modern cloud tooling hurt the recovery time.
The article mentions how setting up SourceHut from scratch is a complex undertaking with hundreds of small tasks. Some of those are understandable, especially given the amount of data they surely handle, but setting up a complete environment and restoring from a backup should be a simple and mostly automated procedure, not a gargantuan undertaking that needs to involve the entire team for several days. There are always difficulties when production is down and you're trying to restore a full system while undergoing a DDoS attack, I get that, but the reason we have modern cloud providers and tooling is to make creating new environments as painless as possible. It seems foolish not to take advantage of that.
I'm not a fan of Kubernetes either, but it's good that they're experimenting with it. Hopefully it leads to quicker deployments if this happens again.
Cloud isn't a dependency to accomplish this. I can stand up new instances of our primary infrastructure within minutes on any VM or server running linux and our agent.
This sounds like a case of treating your infra as pets and getting stuck when it suddenly needs to be replaced.
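To make that concrete, here's a hypothetical bootstrap under such a model; the agent, URLs, and agentctl commands are all invented for illustration:

    # Enroll a fresh Linux host with the (imaginary) config agent...
    curl -fsSL https://control.example.com/agent/install.sh | sh
    agentctl enroll --server https://control.example.com --token "$TOKEN"
    # ...then let it converge the machine to a declared role.
    agentctl apply --role git-frontend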
On a more serious note, I’m really wondering about the motivations. I see the following hypotheses:
1) Test/demonstration of a DDoS against a random target.
2) Attack against a project hosted on sourcehut to make it unavailable (there was even speculation about disabling a master repository so an end user could not verify that their own local version was the correct one, and would thus keep using it with a security hole or a trojan)
3) Attack against a page hosted on sourcehut (I joke that someone wrote "Putin = Fag" on his sourcehut hosted blog).
I'm actually serious there - school/university holidays invariably bring with them a shitload of fairly random DDoS attacks aimed at ... whoever the attacker feels like.
I don't understand it, and I don't know where they get that volume of packets from, but I've seen it happen often enough over the years to consider it a possibility here.
(excuse the vagueness; if you're ever at the same conference bar as me you're welcome to follow me outside when I go for a smoke and ask for the full story then)
5) Somebody very vehemently disagreeing with Drew's (often radical) views. This could also be a simple transphobe, though I expect those people to have far juicier targets to attack.
I’m following Drew’s blog and don’t remember anything particular about trans rights (except some "I condemn those attacking LGBT rights", which is not particularly radical). Also, the attack (which had an obvious cost) would have had no impact on this cause.
No, it wasn't. The outcome is due to major networks being shite: not accommodating newer technologies, and gatekeeping the services that resolve DDoS attacks.
All major network upstreams could do so much more to make the net more reliable and resilient for small ISPs, myself included.
With peer-neutral networking, and without tons upon tons of e-waste devices prone to botnet behaviour, it wouldn't be like this.
I stopped paying for Sourcehut because of his opinions; they are relevant here, since he bans certain types of projects based on them. You never know when another restriction will be added.
If you're talking about banning cryptocurrency and blockchain projects, personally that earned some favor in my eyes. I'm happy to use and pay for a service that doesn't contribute to that blight.
I also do not like "crypto", but I do not think this type of restriction is great on a paid service. Maybe, maybe it could be argued for public repositories. Or if it was free. But like, why does Drew DeVault care that I would have a private repository with "explicit sexual content"? On an account I pay for?
And even if you agree with the current set of restrictions, are you sure it will not be further expanded? I am not.
Having dealt with this in the past, once you let cryptocurrency people onto a service they will stop at nothing to abuse shared resources for whatever mining fad is currently underway. As an operator, you can either hire staff for a full-time whack-a-mole game, get into arguments with customers about whether what they're doing is shitty even though it meets some pedantic interpretation of policy, or just ban the whole crowd and focus on customers who don't suck.
As for banning sex content on a paid service, you'll find it's more common than you think, since payment processors tend to drop customers who permit that sort of thing. Porn-enabled services have chargeback and failed-charge rates orders of magnitude higher than services which forbid them.
There are a ton of reasons to fire a client. The two mentioned here are completely uncontroversial from a business standpoint.
"We do not allow content or activity on GitHub that: ... is sexually obscene or relates to sexual exploitation or abuse, including of minors".
Atlassian's AUP at https://www.atlassian.com/legal/acceptable-use-policy says "Inappropriate content" includes "Posting, uploading, sharing, submitting, or otherwise providing content that ... Is deceptive, fraudulent, illegal, obscene, defamatory, libelous, threatening, harmful to minors, pornographic (including child pornography, which we will remove and report to law enforcement, including the National Center for Missing and Exploited Children), indecent, harassing, hateful"?
GitLab's AUP at https://handbook.gitlab.com/handbook/legal/acceptable-use-po... says "unacceptable use of our services [which] applies to all users of all GitLab services including those on the Free, Premium, and Ultimate GitLab tiers" mean "you must not: Create, upload, submit, execute, transmit, or host anything that ... is vulgar, obscene, or pornographic, or gratuitously depicts or glorifies violence."
Now, there are differences between "explicit sexual content", "sexually obscene" and "pornographic", but if you are worried about possible further expansion, you shouldn't use any of these code hosting services.
I was conjecturing it was the same reason as the other hosting providers, not saying that was the same or that I had special insight.
Instead, I was pointing out that since all the providers I looked at have essentially the same restriction, you likely shouldn't use any of them. Certainly there are a lot of people who use GitHub despite having no guarantee the ToS won't be more restrictive in the future.
Sourcehut's ToS is certainly not exceptional in that regard, so really you are objecting to essentially every 3rd party code hosting provider, yes?
Or is there one you had in mind where you aren't concerned about further expansion?
It's hard to find a payment processor for pornographic providers. Existing payment processors are likely to stop supporting you if you become a porn provider. Additionally, there are branding risks in being associated with adult content. There's also more legal scrutiny involved, and it's outright illegal in some jurisdictions.
A simple Google search on the topic should be educational.
I like when people bring their values when they do business. Especially when those values are more than “make money”, and expressed in more ways than product design.
You know, it's fair not to support the service on that principle.
However, Sourcehut is actually FOSS software.
I.e., if you wanted to run one of their banned things, you could, just on your own hardware.
It's fine, in my opinion, to moderate your services if people have an escape hatch to get out of your service if you require them to move along.
This is a far cry from services such as GitHub, or even GitLab (with their open core), as transferring to your own system is actually possible, though not without some relative pain.
I don't like crypto projects, so of course I am biased here. But if you like free speech then there's not many options and I think sr.ht is the best one (especially if you plan to self-host).
GitHub is well known to be controlling of speech and even championed some measures that affected the entire industry, and as others have mentioned they have restricted projects on a relatively arbitrary basis. Sometimes even due to geographic region.
All services have a permissible-use clause in their terms & conditions. And if this is about cryptocurrencies and blockchain, it's absolutely a valid choice for a small service provider to block them. They are resource-hungry and resource-abusive. A small provider like SourceHut can't afford to take on that unnecessary stress. VC-funded or big players can, and they do it for an easy, quick & short boost in money or marketing PR.
Also, this reminds me of something I have always observed. Not targeted at gray_-_wolf, but a general observation which I just remembered...
There is a trend where people attack small actors/entities for smaller mistakes or opinions, but give a free pass to big players they cannot touch for atrocious BS, because those are monopolies or filthy rich.
Especially in tech, you don't publicly talk shit about Google or MS a lot, because that makes you less hirable.
The reason I don’t use srht is because of his opinions about product development (of srht itself), not his personal opinions.
Social/collaboration features are explicitly deprioritized by design; I think this is a natural consequence of srht being built by and for lone-wolf developers. GitHub and Gitea (which is basically a GitHub clone) seem much more geared toward collaboration by groups, something most small-time f/oss developers don’t need.
Also, the emphasis on email and IRC is bad, IMO. The web won because it is better. A lot of the anti-web stuff is just tradition.
Everyone has an e-mail account. That means if you want to contribute to a project on SourceHut, you don’t need to create an account there.
Also, I hate when I’m looking for useful forks of something on GitHub and have to sift through tens of useless forks that were created just to be able to submit a pull request.
As someone who was there in the early days, who joined the chorus of people warning Drew about the effects of such a policy, I just want to say that Forgejo is a treat to self-host and use. Gitea is now open-core, and its future is unclear.
This really comes down to the intended workflow. By design, SourceHut aims to provide the Linux kernel development model to a wider audience (with extra features beyond mail and Git). It is a very different collaboration model than the likes of GitHub and its peers. I summarize the comparison of the two as "to each their own"; I'm okay with both models and see the merits of both, but my preferences and willingness or ability to work with a given model won't always line up with contributors.
I also self-host Forgejo in my homelab and really enjoy it.
The only mention I can find that Gitea is open-core comes from Forgejo. Do you have some kind of proof that there are parts of Gitea that are not MIT-licensed?
Gitea Ltd's stance seems to be that it does "custom development" support contracts.[0] It may be a matter of perspective whether you consider this "open-core" or "contract work."
See also their clarifications on Gitea the company[1]:
> Gitea Ltd. will be open to building special versions for special clients and will contribute any features back to the main repository when possible
This was in a followup to the original announcement.[2]
Forgejo (i.e. Codeberg, a FOSS non-profit) maintains that the project should be led by the community, not a company[3]:
> Sadly, Gitea Ltd broke that trust by a lack of transparency: its existence was kept a secret during months. After the initial announcement, Gitea Ltd published another blog post but it was still vague and there has been no other communication since. Who are the Gitea Ltd shareholders? Who, among the Gitea maintainers, are employees of Gitea Ltd?
That's so true, but I agree with sneak here (did I just write that?). If my code is on GitHub or GitLab or Gitea or whatever, and I want to work on it with a friend, I can invite them to join me on a website using a workflow similar to 1,000 other not-source-code-related collaboration tools. It's damn near impossible to talk someone into joining an email-based process unless that's something they've already been doing elsewhere. Look at the git-send-email docs[0] which talk about configuring SMTP auth. Followup question from the new person I'd be trying to rope in: "I dunno, my work uses Outlook. What's SMTP?"
If someone contended that SourceHut optimizes for devs who've been writing Linux kernel code for 25 years, so you weed out all the newbs and can get the hardened veterans involved in your project, I could buy that. I'd disagree that it's what I'd want for my project, but to each their own. I couldn't recommend it as an alternative to other services that require participants to know how to use a web browser.
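For what it's worth, the SMTP setup the git-send-email docs walk through boils down to a handful of git config lines; here's roughly what it looks like with a Gmail account (using an app password, not your real one):

    git config --global sendemail.smtpServer smtp.gmail.com
    git config --global sendemail.smtpServerPort 587
    git config --global sendemail.smtpEncryption tls
    git config --global sendemail.smtpUser you@gmail.com

Which, of course, doesn't help the Outlook person at all; that's rather the point.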
Once you learn the git-send-email flow, it is a lot better, especially for distributed development.
With the PR flow, people need to sign up to the website, create a fork, clone the repo, make their changes, go into a slow web UI, etc. It mostly works because everyone is on GitHub. However, even that solution sucks if you have a polyrepo setup and need to make changes in many places.
For bazaar-style development, where you accept contributions from anyone and don't use GitHub, the email flow is so much faster and simpler. Yes, you need to set it up once. But the other day I contributed to an open source project that was self-hosted, and it's amazing: I could just clone the repo, make my changes, commit, and then git-send-email, bam, done. Had I needed to sign up, create an account, and set up a fork, I probably wouldn't have bothered, because it was a small contribution. No need to register on a website, no need to click through a slow UI, no need to create a fork: it reduces the ritual of making a contribution by quite a lot, given that you've set it up.
I am glad that there is a good alternative that supports this flow, because I think it is superior. There are a ton of alternatives if you want the PR flow (GitLab, Gitea, GitHub, Codeberg).
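For a concrete version of the flow described above (the project and list address are illustrative):

    git clone https://git.sr.ht/~someuser/someproject
    cd someproject
    # ...hack, then commit:
    git commit -am "fix: handle empty input"
    # ...and mail the last commit straight to the project's list:
    git send-email --to="~someuser/someproject-devel@lists.sr.ht" HEAD^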
I’ve done the email workflow for a bit. I’ll say this much: it might be comparable to configuring a power editor vs. using some powerful and ready-to-go IDE. You can set up things how you like, and the preferences of everyone else don’t really matter. You can also just edit anything, because it’s fast and there is probably a good-enough configuration for all kinds of languages and modes.
But in some ways it isn’t. Like any fool (like me) can just get some Emacs configuration for free from others. There doesn’t seem to be that kind of sharing for all the fiddly little things you need to do with git-send-email and the rest. All I’ve heard so far is that, oh yeah I usually deal with this specific issue by running some Perl scripts that I wrote eight years ago and that I’ve been nurturing ever since. But it wouldn’t be very useful for you because it’s very, very idiosyncratic. Might not even work outside Debian and my Apt state...
> But in some ways it isn’t. Like any fool (like me) can just get some Emacs configuration for free from others.
To be clear though: there are programs and tools beyond the git(1) tools themselves that help you with email workflows. Like b4, which is built for the Linux kernel workflow.
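For example, with b4, applying a whole patch series from the kernel list archive is a single command keyed on the Message-ID (the ID below is made up):

    # Fetch the series from lore.kernel.org and apply it to the current branch:
    b4 shazam 20240101120000.12345-1-someone@example.org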
A few plugins to allow git-send-email to work without much configuration, by using your existing Outlook or Gmail or Thunderbird setup (basically, a git plugin for Outlook), would be very desirable.
That might be intractable. Some clients will rewrite/corrupt (whatever you want to call it) messages before sending. Like maybe it reflows paragraphs and ruins your inlined diff. Or it helpfully replaces tabs with spaces (the project requires tabs).
So what can you do with that? Doesn’t seem like you can do anything.
Outlook seems especially bad. [Here] is some HN hearsay about how even Microsoft had to run Linux machines in order to contribute to the Linux project.
> Social/collaboration features are explicitly deprioritized by design; I think this is a natural consequence of srht being built by and for lone wolf developers.
It's not the case that SourceHut treats collaboration features as out-of-scope. Drew seems to draw a bright line between social and collaboration features, and intends for SourceHut to offer solid collaboration features, but without any hint of trying to be a social network.
From an old comment of Drew's: (edit: I now see Drew's comment was replying to you yourself)
> SourceHut is designed to facilitate collaboration, of course, but it's done differently from platforms like GitHub and those that seek to emulate it. And of course it is more than a git frontend, providing tools specifically to facilitate collaboration such as mailing lists and bug trackers. SourceHut is an engineering tool, not a social network. It is designed to get your work done and then get out of your way.
> GitHub is explicitly designed like a social network, and this is a design that we reject.
I adore Gitea. 99% of the stuff I keep there is private code, where Gitea is basically an SSH-able Git remote. However, I occasionally want to share a project with a friend, and then it's trivially easy to invite them to collaborate with me using the same infrastructure I was already using.
Minus that last part, I'd just stick with plain Git. It's everything I need for my own personal, only-for-me projects.
Are they deprioritized? Or are they just different, in a way which you judge as not conducive to collaboration? (I mean, you mention mailing lists.)
There’s not really much need for a “forge” without collaboration. I wouldn’t pay the price of SourceHut just so that I can fetch and whatever between my machines. That’s like a pricey sneakernet.
Per my reply to sneak's comment, collaboration features are part of SourceHut's goals, but its approach is very different from that of GitHub.
If all you want is a lightweight web UI for a git repository, git's official GitWeb project offers this. SourceHut isn't just an alternative to GitWeb.